IBM Netcool Operations Insight Version 1 Release 6

Integration Guide

IBM

SC27-8601-13

Note: Before using this information and the product it supports, read the information in Appendix B, “Notices,” on page 727.

This edition applies to version 1.6.1 of IBM® Netcool® Operations Insight® (product number 5725-Q09) and to all subsequent releases and modifications until otherwise indicated in new editions.

© Copyright International Business Machines Corporation 2020, 2020.

US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

About this publication...... vii
   Accessing terminology online...... vii
   Technical training...... vii
   Typeface conventions...... vii

Chapter 1. Solution overview...... 1
   What's new...... 1
   Deployment modes...... 4
   Products and components on cloud...... 4
   Products and components on premises...... 5
   V1.6.1 Product and component version matrix...... 9
   About Netcool Operations Insight...... 14
   About Operations Management...... 14
   About Network Management...... 18
   About Performance Management...... 23
   About Service Management...... 23

Chapter 2. Deployment...... 25
   Deploying on premises...... 25
   Scenarios for Operations Management...... 25
   Scenarios for Network Management...... 30
   On-premises physical deployment...... 31
   Deploying on Red Hat OpenShift...... 32

Chapter 3. Installing...... 35
   Installing on-premises...... 35
   Planning for an on-premises installation...... 35
   Downloading for on-premises installation...... 41
   Installing on premises...... 41
   Uninstalling on premises...... 98
   Uninstalling Event Analytics...... 99
   Installing on Red Hat OpenShift only...... 100
   Preparing...... 102
   Installing...... 118
   Post-installation tasks...... 130
   Uninstalling operators...... 134
   Deploying on a hybrid architecture...... 136
   Preparing...... 136
   Installing...... 157
   Post-installation tasks...... 176
   Uninstalling a hybrid installation...... 180

Chapter 4. Upgrade and roll back...... 183
   Upgrade and roll back on premises...... 183
   Updated versions in the V1.6.1 release...... 183
   Downloading product and components...... 184
   Applying the latest fix packs...... 186
   Roll back on-premises Netcool Operations Insight from V1.6.1 to V1.6.0.3...... 187
   Upgrading Event Analytics...... 187
   Upgrading Network Performance Insight...... 191
   Installing on-premises Agile Service Manager...... 192

Chapter 5. Configuring...... 193
   Cloud and hybrid systems...... 193
   Enabling SSL communications from Netcool/Impact on OpenShift...... 193
   Connecting a Cloud system to event sources...... 194
   Configuring analytics...... 205
   Configuring integrations...... 210
   On-premises systems...... 262
   Connecting event sources...... 262
   Configuring Operations Management...... 264

Chapter 6. Getting started...... 347
   Cloud and hybrid systems...... 347
   Which GUI to log into...... 347
   Logging into Netcool Operations Insight...... 347
   Accessing the main navigation menu...... 349
   Accessing the Getting started page...... 350
   On-premises systems...... 351
   Getting started with Netcool Operations Insight...... 351
   Getting started with Networks for Operations Insight...... 352

Chapter 7. Administering...... 353
   Cloud and hybrid systems...... 353
   Administering users...... 353
   Loading data...... 361
   Administering the Events page...... 369
   Administering topology...... 369
   Administering policies...... 369
   Managing runbooks and automations...... 375
   On-premises systems...... 404
   Administering Event Analytics...... 404

Chapter 8. Operations...... 467
   Cloud and hybrid systems...... 467
   Resolving events...... 467
   Working with topology...... 485
   Monitoring operational efficiency...... 498
   On-premises systems...... 499
   Managing events with IBM Netcool/OMNIbus Web GUI...... 499
   Using Event Search...... 499

Chapter 9. Troubleshooting...... 651
   Cloud systems...... 651
   Version information for pods running on OpenShift...... 651
   Installation...... 652
   Operations...... 663
   Hybrid systems...... 666
   Errors on command line after installation...... 666
   Error sorting columns in Event Viewer on hybrid deployments...... 667
   Console integration fails in hybrid deployment...... 668
   Manage Policies screen has broken policy hyperlinks...... 669
   Event Viewer has missing analytics icons...... 669
   Error when you try to open Incident Viewer (hybrid HA)...... 669
   Runbook automation and Netcool/Impact integration: ObjectServer maximum row size error...... 669
   On-premises systems...... 670
   JVM memory usage at capacity...... 670
   Event analytics...... 671
   Troubleshooting Event Search...... 678

Chapter 10. Reference...... 683
   Accessibility features...... 683
   API...... 683
   Runbook Automation API...... 683
   Audit log files...... 689
   Config maps...... 690
   Primary Netcool/OMNIbus ObjectServer configmap...... 690
   Backup Netcool/OMNIbus ObjectServer configmap...... 691
   Netcool/Impact core server configmap...... 692
   Netcool/Impact GUI server configmap...... 694
   Proxy configmap...... 695
   LDAP Proxy configmap...... 696
   Dashboard Application Services Hub configmap...... 696
   Gateway for Message Bus configmap...... 697
   Configuration share configmap...... 699
   Cassandra configmap...... 699
   ASM-UI configmap...... 700
   cloud native analytics gateway configmap...... 700
   CouchDB configmap...... 700
   Kafka configmap...... 700
   Zookeeper configmap...... 700
   Event reference...... 700
   Column data from the ObjectServer...... 700
   Column data from other sources...... 700
   Insight packs...... 704
   Notices...... 704
   Trademarks...... 706

Appendix A. Release Notes...... 707

Appendix B. Notices...... 727
   Trademarks...... 728

About this publication

This guide contains information about how to integrate the components of the IBM Netcool Operations Insight solution.

Accessing terminology online
The IBM Terminology Web site consolidates the terminology from IBM product libraries in one convenient location. You can access the Terminology Web site at the following Web address: http://www.ibm.com/software/globalization/terminology

Technical training
For technical training information, see the IBM Skills Gateway site at https://www-03.ibm.com/services/learning/ites.wss/zz-en?pageType=page&c=a0011023

Typeface conventions
This publication uses the following typeface conventions:

Bold
• Lowercase commands and mixed case commands that are otherwise difficult to distinguish from surrounding text
• Interface controls (check boxes, push buttons, radio buttons, spin buttons, fields, folders, icons, list boxes, items inside list boxes, multicolumn lists, containers, menu choices, menu names, tabs, property sheets) and labels (such as Tip and Operating system considerations)
• Keywords and parameters in text

Italic
• Citations (examples: titles of publications, diskettes, and CDs)
• Words defined in text (example: a nonswitched line is called a point-to-point line)
• Emphasis of words and letters (words as words example: "Use the word that to introduce a restrictive clause."; letters as letters example: "The LUN address must start with the letter L.")
• New terms in text (except in a definition list): a view is a frame in a workspace that contains data
• Variables and values that you must provide: ... where myname represents ...

Monospace
• Examples and code examples
• File names, programming keywords, and other elements that are difficult to distinguish from surrounding text
• Message text and prompts addressed to the user
• Text that the user must type
• Values for arguments or command options

Bold monospace
• Command names, and names of macros and utilities that you can type as commands
• Environment variable names in text
• Keywords
• Parameter names in text: API structure parameters, command parameters and arguments, and configuration parameters
• Process names
• Registry variable names in text
• Script names

Chapter 1. Solution overview

Read about key concepts and capabilities of IBM Netcool Operations Insight.

Related reference
Release notes
IBM Netcool Operations Insight V1.6.1 is available. Compatibility, installation, and other getting-started issues are addressed in these release notes.

What's new
Netcool Operations Insight V1.6.1 includes a range of new features and functions. This description of new features and functions is also available in the Release notes.

New product features and functions in V1.6.1

Updated product versions in V1.6.1
The Netcool Operations Insight V1.6.1 solution includes features delivered by the fix packs and extensions of the products and versions listed in “V1.6.1 Product and component version matrix” on page 9. The products are available for download from Passport Advantage® and Fix Central. For more information about the new features in these products and components, see the following topics:

• Red Hat OpenShift: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.4/html/release_notes/index
• IBM Cloud Platform Common Services: https://www.ibm.com/support/knowledgecenter/en/SSHKN6/kc_welcome_cs.html
• The cloud event management service: https://www.ibm.com/support/knowledgecenter/en/SSURRN/com.ibm.cem.doc/em_whatsnew.html
• The runbook automation service: https://www.ibm.com/support/knowledgecenter/SSZQDR/com.ibm.rba.doc/GS_whatsnew.html
• IBM Agile Service Manager: https://www.ibm.com/support/knowledgecenter/en/SS9LQB_1.1.8/ProductOverview/r_asm_whatsnew.html
• IBM Tivoli® Netcool/OMNIbus: https://www.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/install/reference/omn_prodovr_whatsnew.html
• IBM Tivoli Netcool/Impact: https://www.ibm.com/support/knowledgecenter/SSSHYH_7.1.0/com.ibm.netcoolimpact.doc/whatsnew.html
• IBM Operations Analytics - Log Analysis: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.6/com.ibm.scala.doc/overview/ovr-whats_new.html
• IBM Tivoli Network Manager: https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/overview/concept/ovr_whatsnew.html
• IBM Tivoli Netcool Configuration Manager: https://www.ibm.com/support/knowledgecenter/SS7UH9_6.4.2/ncm/wip/common/reference/ncm_ovr_whatsnew.html
• IBM Network Performance Insight: https://www.ibm.com/support/knowledgecenter/SSCVHB_1.3.1/release_notes/cnpi_release_notes.html#cnpi_release_notes__whatsnew

Deprecated features in V1.6.1
The following features and functions are deprecated in the Netcool Operations Insight V1.6.1 product:

Netcool Operations Insight on IBM Cloud Private
Support for deploying Netcool Operations Insight on IBM Cloud Private was removed in V1.6.1.

IBM Operations Analytics - Log Analysis
Support for deploying IBM Operations Analytics - Log Analysis with Netcool Operations Insight on IBM Cloud Private was removed in V1.6.1. IBM Operations Analytics - Log Analysis is still supported in an on-premises deployment of Netcool Operations Insight.

New features in V1.6.1
The following features and functions are available in the Netcool Operations Insight V1.6.1 product:

OpenShift
Support for deploying Netcool Operations Insight on Red Hat OpenShift V4.3 and V4.4.3 was added in V1.6.1.

Platform
Support for deploying Netcool Operations Insight on Red Hat OpenShift on the following cloud platforms was added in V1.6.1:
• Amazon Web Services (AWS)
• Google Cloud Platform
• Microsoft Azure
• Red Hat OpenShift Kubernetes Service (ROKS)

Operator installation
Netcool Operations Insight on Red Hat OpenShift V1.6.1 is installed with operators. For more information, see “Installing Netcool Operations Insight on Red Hat OpenShift” on page 118.

Probable cause
You can use a new analytics capability that is designed to identify the event with the greatest probability of being the cause of the event group. For more information, see “Configuring probable cause” on page 207.

Topology correlation
You can create topology templates that generate defined topologies, which search your topology database for instances that match their conditions. For more information, see “Configuring topological correlation” on page 208.

Temporal correlation enhancements
Temporal correlation analytics were enhanced in this release to also detect patterns that are based on the identified temporal groups. Temporal pattern analysis identifies patterns of behavior among temporal groups that are similar but occur on different resources. Subsequent events that match the pattern and occur on a new common resource are grouped together. For more information, see “Configure temporal correlation” on page 207.

Documentation changes in V1.6.1
The table of contents has been completely overhauled to make it more task oriented. There are also new sections that describe the content of the new Netcool Operations Insight user interface, such as the new Events page. Here are the key changes and additions:
• Expanded Configuring section, which now includes information on how to configure analytics, event source integrations, automation types for runbooks, and API keys.
• Expanded Getting started section, which now includes information on how to access the new main Netcool Operations Insight user interface.

• Expanded Administering section, which now includes information on how to administer the new Events page, analytics policies, and runbooks.
• New Operations section containing all relevant operations tasks. For operations teams working with Cloud and hybrid systems, this section includes event monitoring tasks in the new Events page, as well as monitoring topology and operations efficiency dashboards.
• The Configuring, Getting started, Administering, and Operations sections have been divided into separate Cloud and hybrid systems and On-premises systems sections, to enable you to easily find the tasks that you need.

The following table provides a mapping from key sections of the previous version to the new location of that documentation in the V1.6.1 table of contents.

Table 1. Location of key documentation sections in the previous version and the new location of that documentation in V1.6.1

• V1.6.0 location: Configuring > Configuring Operations Management > Configuring Event Search
  V1.6.1 location: Configuring > On-premises systems > Configuring Operations Management > “Configuring Event Search” on page 301

• V1.6.0 location: Configuring > Configuring Network Management > Configuring the Network Manager Insight Pack
  V1.6.1 location: Configuring > On-premises systems > Configuring Operations Management > Configuring Topology Search > “Configuring Topology Search” on page 338

• V1.6.0 location: Configuring > Configuring integration to IBM Connections
  V1.6.1 location: Configuring > On-premises systems > Configuring Operations Management > “Configuring integration to IBM Connections” on page 339

• V1.6.0 location: Event search
  V1.6.1 locations: Configuring > On-premises systems > Configuring Operations Management > “Configuring Event Search” on page 301; and Operations > On-premises systems > “Using Event Search” on page 499

• V1.6.0 location: Event Analytics
  V1.6.1 locations: Installing > Installing on premises > Installing Operations Management > “Installing Event Analytics” on page 53; Installing > Installing on premises > “Uninstalling Event Analytics” on page 99; Configuring > On-premises systems > Configuring Operations Management > “Configuring Event Analytics” on page 264; and Administering > On-premises systems > “Administering Event Analytics” on page 404

• V1.6.0 location: Topology search
  V1.6.1 locations: Configuring > On-premises systems > Configuring Operations Management > “Configuring Topology Search” on page 332; and Operations > On-premises systems > “Using Topology Search” on page 503

• V1.6.0 location: Networks for Operations Insight
  V1.6.1 location: Operations > On-premises systems > “IBM Networks for Operations Insight” on page 505

Deployment modes
Netcool Operations Insight can be deployed on premises, on Red Hat OpenShift, or in a hybrid deployment.

On premises
In this mode, all of the Netcool Operations Insight products and components are installed onto servers and use the native computing resources of those servers. Netcool Operations Insight installed on premises is called Operations Management. For more information, see “Deploying on premises” on page 25.

Red Hat OpenShift
You can install Netcool Operations Insight on OpenShift. Distinct features within Netcool Operations Insight products and components, such as probes, the ObjectServer, and the Web GUI, run within containers, and communication between these containers is managed and orchestrated by OpenShift running on a Kubernetes cluster. For more information about deploying on OpenShift, see “Deploying on Red Hat OpenShift” on page 32.

Hybrid deployment
You can deploy cloud native Netcool Operations Insight components on OpenShift and integrate them with an on-premises deployment of Operations Management. For more information, see “Deploying on a hybrid architecture” on page 136.

Products and components on Red Hat OpenShift
Learn about the architecture of a deployment of IBM Netcool Operations Insight on OpenShift.

The OpenShift cluster is made up of virtual machines, or nodes. There are three or more master nodes, one or more infrastructure nodes, and three or more worker nodes where the Kubernetes pods are deployed. Each pod has one or more containers. The master nodes provide management services and control the worker nodes in the cluster, and the infrastructure nodes run services, such as Docker, that are required by the cluster.

Note: Operations Analytics - Log Analysis is not available on cloud-based systems. It is available only on premises, or on premises in a hybrid installation.

The following figure shows the architecture of a deployment of Netcool Operations Insight on OpenShift.

Figure 1. Architecture of Netcool Operations Insight on OpenShift deployment.

Elements of the diagram are described here:

Routes
A Netcool Operations Insight on OpenShift deployment requires several routes to be created: to direct traffic from clients, such as web browsers, to the Netcool Operations Insight services, and for services to communicate internally. For a full list of routes, run the oc get routes command on a deployed instance of Netcool Operations Insight. Commonly used URLs:
• Cloud GUI: https://netcool.custom-resource.cluster-domain
• Dashboard Application Services Hub: https://netcool.custom-resource.cluster-domain/ibm/console
• IBM Tivoli Netcool/Impact: https://impact.custom-resource.cluster-domain/ibm/console

Container Platform
This is the underlying Red Hat OpenShift system on which the Kubernetes cluster is deployed. IBM Cloud® Pak supports the IBM Netcool Operations Insight workload, together with other deployed workloads. For IBM Netcool Operations Insight, the cluster is made up of a minimum number of virtual machines, which are deployed as a master node (including management, proxy, and boot functions) and worker nodes within the cluster, together with a local storage file system. For more information, see the Red Hat product documentation for OpenShift Container Platform V4.4: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.4/

IBM Netcool Operations Insight cluster
The diagram displays an installation on a container platform, where the IBM Netcool Operations Insight cluster is deployed as containerized IBM Netcool Operations Insight applications within pods. The cluster is made up of a set of virtual machines that serve as nodes within the cluster. There is one master node and one management node, and the remaining virtual machines serve as worker nodes, where the Kubernetes pods and containers, known as workloads, are deployed. You can also create namespaces within your cluster. Namespaces enable multiple independent installations of IBM Netcool Operations Insight within the cluster, with each installation deployed in a separate namespace.

Interaction with the cluster is managed by the master node, as follows:
• Administration of the cluster is performed by connecting to the master node, either with the catalog UI or with the Kubernetes command-line interface, kubectl.
• Users log in to applications provided by the pods and containers within the cluster, such as the Web GUI, by opening a web browser and navigating to a URL made up of the master node hostname and the port number used by the relevant application.

Kubernetes manages how pods are deployed across the worker nodes and orchestrates communication between the pods. Pods are deployed only on worker nodes that meet the minimum resource requirements specified for that pod. Kubernetes uses a mechanism called affinity to ensure that pods that must be deployed on different worker nodes are deployed correctly. For example, it ensures that the primary ObjectServer container is deployed on a different worker node from the backup ObjectServer container.

Storage within the cluster
Storage must be created before you deploy Netcool Operations Insight on OpenShift.
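For example, the following commands show how you might inspect a deployment from the OpenShift command line. The API URL, user name, and the netcool namespace are illustrative placeholders rather than values mandated by the product; substitute the details of your own cluster and project.

    # Log in to the cluster (API URL and user are examples)
    oc login https://api.mycluster.example.com:6443 -u admin

    # Confirm the master and worker nodes that make up the cluster
    oc get nodes

    # List the routes created for a Netcool Operations Insight instance
    # (replace "netcool" with the namespace that you deployed into)
    oc get routes -n netcool

    # Show only each route's name and hostname
    oc get routes -n netcool -o custom-columns=NAME:.metadata.name,HOST:.spec.host

The same inspection can be done with kubectl, because oc accepts the standard kubectl verbs in addition to its OpenShift-specific commands.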

Products and components on premises
Review the products and components included in Netcool Operations Insight. IBM Netcool Operations Insight includes the product and component versions listed in “V1.6.1 Product and component version matrix” on page 9. That topic also includes information on the eAssemblies and fix packs required to download and install.

Product and component details

Tivoli Netcool/OMNIbus core components V8.1.0.22
This product includes the following components. It is installed by Installation Manager. It is part of the base Netcool Operations Insight solution, so it must be installed, configured, and running before you can start the Networks for Operations Insight feature setup.
• Server components (includes ObjectServers)
• Probe and gateway feature
• Accelerated Event Notification (AEN) client
For system requirements, see http://ibm.biz/Bd2LHA
Important: The ObjectServer that manages the event data must be at V8.1.0.

Tivoli Netcool/OMNIbus Web GUI V8.1.0.19
This component includes the following subcomponents and add-ons. It is installed by Installation Manager. It is part of the base Netcool Operations Insight solution. The following extensions to the Web GUI are supplied in Netcool Operations Insight:
• Tools and menus for integration with Operations Analytics - Log Analysis
• Extensions for Netcool Operations Insight, which support the Event Analytics capability
Important: Both the Impact Server Extensions and the Web GUI extensions must be installed for the Event Analytics capability to work.
The Web GUI is installed into Dashboard Application Services Hub, which is part of Jazz® for Service Management. Jazz for Service Management is distributed as separate installation features in Installation Manager. For system requirements, see http://ibm.biz/Bd2LHt

Db2® Enterprise Server Edition database
Db2 is the default database used for the Netcool Operations Insight solution. Other types of databases are also possible.
• Db2 Enterprise Server Edition V11.1 is for use with Operations Management components. For system requirements, see http://ibm.biz/Bd2L4E
• Db2 Enterprise Server Edition V11.5 is also available.

Gateway for JDBC
This product is required for the base Netcool Operations Insight solution. It is installed by Installation Manager. The system requirements are the same as for Tivoli Netcool/OMNIbus V8.1. It is required for the transfer of event data from the ObjectServer to the IBM Db2 database.

Netcool/Impact V7.1.0.19
This product includes the following components. It is part of the base Netcool Operations Insight solution. It is installed by Installation Manager.
• Impact server
• GUI server
• Impact Server extensions, which include the policies that are used to create the event analytics algorithms and the integration to IBM Connections
Important: Both the Impact Server Extensions and the Web GUI extensions must be installed for the Event Analytics capability to work.
For system requirements, see http://ibm.biz/Bd2L4Y

IBM Operations Analytics - Log Analysis V1.3.6
Netcool Operations Insight works with IBM Operations Analytics - Log Analysis V1.3.6, which is part of the base Netcool Operations Insight solution. It is installed by Installation Manager. For system requirements, search for "Hardware and software requirements" within the relevant version of IBM Operations Analytics - Log Analysis at https://www.ibm.com/support/knowledgecenter/SSPFMY

Note: Operations Analytics - Log Analysis Service Desk Extension V1.1.0 is available with IBM Operations Analytics - Log Analysis V1.3.6.
Note: Operations Analytics - Log Analysis Standard Edition is included in Netcool Operations Insight. For more information about Operations Analytics - Log Analysis editions, search for "Editions" at the Operations Analytics - Log Analysis Knowledge Center at https://www.ibm.com/support/knowledgecenter/SSPFMY

OMNIbusInsightPack_v1.3.1 for IBM Operations Analytics - Log Analysis
This product is part of the base Netcool Operations Insight solution. It is required to enable the event search capability in Operations Analytics - Log Analysis. The Insight Pack is installed into Operations Analytics - Log Analysis.

Gateway for Message Bus V8.0
This product is part of the base Netcool Operations Insight solution. It is installed by Installation Manager. The system requirements are the same as for Tivoli Netcool/OMNIbus V8.1.0.22. It is used for the following purposes:
• Transferring event data to the IBM Operations Analytics - Log Analysis product.
• Supporting the transfer of event data to Agile Service Manager by integrating with the Agile Service Manager Event Observer.

Jazz for Service Management V1.1.3.7
This component provides the GUI framework for the Netcool Operations Insight solution. It is installed by Installation Manager, and it includes the following subcomponents:
• Dashboard Application Services Hub V3.1.3.6
• Reporting Services (previously called Tivoli Common Reporting)
Note: For the cumulative patch to use for this version of Jazz for Service Management, see the web page for the relevant version of Netcool Operations Insight at https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Netcool%20OMNIbus/page/Release%20details
For the system requirements for Dashboard Application Services Hub, see http://ibm.biz/BdiVYN. The instance of Dashboard Application Services Hub hosts the V8.1 Web GUI and the Seasonal Event Reports portlet. Jazz for Service Management is included in the Web GUI installation package but is installed as separate features.
You can set up Network Manager and Netcool Configuration Manager to work with Reporting Services by installing their respective reports when you install the products. Netcool/OMNIbus V8.1.0.22 and later can be integrated with Reporting Services V3.1 to support reporting on events. To configure this integration, connect Reporting Services to a relational database through a gateway. Then, import the report package that is supplied with Netcool/OMNIbus into Reporting Services. For more information about event reporting, see the Netcool/OMNIbus documentation: http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/install/task/omn_con_ext_deploytcrreports.html

Network Manager IP Edition V4.2.0.10
This product includes the core and GUI components for the optional Networks for Operations Insight feature. For system requirements, see the following links:
• http://www-01.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/itnm/ip/wip/install/task/nmip_pln_planninginst.html
• http://ibm.biz/Bd2L4h

Network Manager Insight Pack V1.3.0.0 for IBM Operations Analytics - Log Analysis
This product is part of the Networks for Operations Insight feature. It is required to enable the topology search capability in Operations Analytics - Log Analysis. The Insight Pack is installed into Operations Analytics - Log Analysis. It requires that the OMNIbusInsightPack_v1.3.1 is installed.

Note: The Network Manager Insight Pack V1.3.0.0 can share a data source with the OMNIbusInsightPack_v1.3.1 only. It cannot share a data source with previous versions of the Tivoli Netcool/OMNIbus Insight Pack.

Probe for SNMP
This product is optional for the base Netcool Operations Insight solution. It is used in environments that have SNMP traps. It is required for the Networks for Operations Insight feature. For installations of the probe on the Tivoli Netcool/OMNIbus V8.1 server, use the instance of the probe that installs with IBM Installation Manager.

Syslog Probe
This product is optional for the base Netcool Operations Insight solution. It is required for the Networks for Operations Insight feature. For installations of the probe on the Tivoli Netcool/OMNIbus V8.1 server, use the instance of the probe that installs with IBM Installation Manager.

Netcool Configuration Manager V6.4.2.11
This product has the following components. It is part of the optional Networks for Operations Insight feature.
• Core components
• Drivers
• OOBC component
For system requirements, see http://ibm.biz/Bd2L4J

IBM Network Performance Insight V1.3.1
Network Performance Insight is a network traffic performance monitoring system. It provides comprehensive and scalable visibility of network traffic, with visualization and reporting of network performance data for complex, multivendor, multi-technology networks. The end user can perform the following tasks: visualize flow across selected interfaces, display performance anomaly events in the Tivoli Netcool/OMNIbus Event Viewer, and view performance anomaly and performance timeline data in the Device Dashboard. For more information, see the Network Performance Insight Knowledge Center: http://www-01.ibm.com/support/knowledgecenter/SSCVHB/welcome

IBM Alert Notification
IBM Alert Notification gives IT staff instant notification of alerts for any critical issues across the multiple monitoring tools in your IT operations environment. For more information, see the IBM Alert Notification Knowledge Center: http://www-01.ibm.com/support/knowledgecenter/SSY487/com.ibm.netcool_OMNIbusaas.doc_1.2.0/landingpage/product_welcome_alertnotification.html

More information
For more information about the component products of Netcool Operations Insight, see the websites listed in the following table.

Table 2. Product information
• IBM Netcool Operations Insight: http://www.ibm.com/support/knowledgecenter/SSTPTP/welcome
• IBM Tivoli Netcool/OMNIbus and Web GUI: http://www-01.ibm.com/support/knowledgecenter/SSSHTQ/landingpage/NetcoolOMNIbus.html
• IBM Tivoli Netcool/Impact: http://www-01.ibm.com/support/knowledgecenter/SSSHYH/welcome
• IBM Operations Analytics - Log Analysis: http://www-01.ibm.com/support/knowledgecenter/SSPFMY/welcome
• Jazz for Service Management: http://www.ibm.com/support/knowledgecenter/SSEKCU/welcome
• IBM Tivoli Network Manager: Network Manager Knowledge Center
• IBM Tivoli Netcool Configuration Manager: http://www-01.ibm.com/support/knowledgecenter/SS7UH9/welcome
• Network Performance Insight: http://www-01.ibm.com/support/knowledgecenter/SSCVHB/welcome
• Agile Service Manager: https://www-01.ibm.com/support/knowledgecenter/SS9LQB/welcome
• Runbook automation: http://www-01.ibm.com/support/knowledgecenter/SSZQDR/com.ibm.rba.doc/RBA_welcome.html
• Alert notification: http://www.ibm.com/support/knowledgecenter/SSY487/com.ibm.netcool_OMNIbusaas.doc_1.2.0/landingpage/product_welcome_alertnotification.html

V1.6.1 Product and component version matrix
This page lists the products and components that are supported in the current on-premises release of IBM Netcool Operations Insight, and the packages for Netcool Operations Insight on Red Hat OpenShift. The current version is 1.6.1. Only this combination of product and component releases is supported.

Download docs
• Netcool Operations Insight V1.6.1 on OpenShift: https://www.ibm.com/support/pages/node/6221238
• Netcool Operations Insight V1.6.1 on premises solution: https://www.ibm.com/support/pages/node/6221236

Installing on OpenShift
This section lists the Netcool Operations Insight on OpenShift V1.6.1 eAssemblies.

Table 3. Netcool Operations Insight on OpenShift V1.6.1 eAssembly
• Netcool Operations Insight V1.6.1 Cloud Paks eAssembly; download from Passport Advantage: CJ5MZEN

Parts
Download the parts from the CJ5MZEN Netcool Operations Insight V1.6.1 Cloud Paks eAssembly depending on your requirements.
• CC6QFML: Always download this part.
You can also download the following probe and gateway parts from the CJ5MZEN eAssembly:
• CC714ML: IBM Netcool Operations Insight Event Integrations v1.0.0 Linux containers. For more information about the Netcool Operations Insight Event Integrations Operator and the integrations that you can use the operator to deploy, see https://www.ibm.com/support/knowledgecenter/SSSHTQ/omnibus/operators/all_operators/ops_intro.html
For more information about probes and gateways, see the IBM Tivoli Netcool/OMNIbus integrations Knowledge Center: https://www.ibm.com/support/knowledgecenter/SSSHTQ/omnibus/common/kc_welcome-444.html

Installing on premises
The following sections list the Netcool Operations Insight on premises V1.6.1 product and component versions.

Supported versions
Only the product and component versions listed here are supported as part of the Netcool Operations Insight solution for a particular release. The reasons for this are as follows:
• The features of a given release of Netcool Operations Insight are delivered by the versions of the component software in that release. If you do not have the latest versions of the component software, then you do not get all of the features in the Netcool Operations Insight release.
• Testing is done only on the component versions that make up a Netcool Operations Insight release, and this combination of products is what is supported. If you install a different set of component versions as part of your Netcool Operations Insight solution, that combination is untested and is not guaranteed to be supported.

Version information
Click a link below to display the relevant version information.
• Product and component versions
• Integrations

Product and component versions
The following table lists the Netcool Operations Insight on premises V1.6.1 product and component versions.
Note: If no Passport Advantage or Fix Central entry is listed for a particular product or component, then there is nothing to download from that location.
Install Netcool Operations Insight 1.6.1 by using IBM Installation Manager V1.8.5.

Table 4. Netcool Operations Insight on premises V1.6.1 product and component versions
Each entry lists the product or component, its version, whether it changed from the previous release, and, where applicable, the Passport Advantage eAssembly and Fix Central download.

• IBM Tivoli Netcool/OMNIbus core components: V8.1.0.22; changed: No; Passport Advantage: CJ5N1EN; Fix Central: V8.1.0 Fix Pack 22
• Tivoli Netcool/OMNIbus Web GUI: V8.1.0.19; changed: Yes; Fix Central: V8.1.0 Fix Pack 19
• IBM Tivoli Netcool/OMNIbus 8 Plus Gateway for Message Bus: V8.0; changed: No; Passport Advantage: CC5UHEN
• IBM Tivoli Netcool/OMNIbus 8 Plus Gateway for JDBC: V8.0; changed: Yes; Passport Advantage: CC70ZEN
• IBM Tivoli Netcool/OMNIbus 8 Plus JDBC Gateway Configuration Scripts: V8.0; changed: No; Passport Advantage: CN1FLEN
• IBM Tivoli Netcool/Impact: V7.1.0.19; changed: Yes; Passport Advantage: CJ5N2EN; Fix Central: V7.1.0 Fix Pack 19
• Db2: V11.1 and V11.5; changed: Yes; Passport Advantage: CJ3INML (Db2 V11.1 Advanced Workgroup Server Edition for IBM Tivoli Netcool/OMNIbus and IBM Tivoli Netcool/Impact eAssembly, for use with Operations Management components) and CJ7QGEN (IBM Db2 Server Edition V11.5 for Netcool Operations Insight V1.6.1)
• Operations Analytics - Log Analysis: V1.3.6; changed: No; Passport Advantage: CJ5N3EN
• Operations Analytics - Log Analysis Service Desk Extension: V1.1.0; changed: No; Passport Advantage: CJ1NHEN (Note: only available with CJ5N3EN)
• IBM Operations Analytics Advanced Insights Multiplatform English eAssembly: V1.3.6; changed: No; Passport Advantage: CJ47JEN
• Event Analytics: IBM Tivoli Netcool/Impact Server Extensions for IBM Netcool Operations Insight_7.1.0.19; changed: Yes; included in Netcool/Impact V7.1.0.19
• Event Analytics: IBM Netcool Operations Insight Extension for IBM Tivoli Netcool/OMNIbus Web GUI_8.1.0.19; changed: Yes; included in Web GUI V8.1.0.19
• Event Search: Tivoli Netcool/OMNIbus Insight Pack V1.3.1; changed: No; Passport Advantage: CNS6GEN (included in the Operations Analytics - Log Analysis V1.3.6 eAssembly)
• Event Search: Tivoli Netcool/OMNIbus Insight Pack V1.3.0.2; changed: No; Passport Advantage: CN8IPEN (included in the Operations Analytics - Log Analysis V1.3.6 eAssembly)
• Topology Search: Network Manager Insight Pack V1.3.0.0; changed: No; Passport Advantage: CNZ43EN (included in the Operations Analytics - Log Analysis eAssembly)
• IBM Tivoli Network Manager IP Edition: V4.2.0.10; changed: Yes; Passport Advantage: CJ5N4EN; Fix Central: V4.2.0 Fix Pack 10
• Device Dashboard: V1.1.0.2; changed: No; Passport Advantage: CJ1U2EN; Fix Central: V1.1 Fix Pack 2
• Network Health Dashboard: V4.2; changed: No; Passport Advantage: CJ0S2EN; Fix Central: V4.2
• IBM Tivoli Netcool Configuration Manager: V6.4.2.11; changed: Yes; Passport Advantage: CJ5N5EN; Fix Central: V6.4.2 Fix Pack 11
• IBM Network Performance Insight: V1.3.1; changed: No; Passport Advantage: CJ5N6EN
• IBM Agile Service Manager: V1.1.8; changed: Yes; Passport Advantage: CJ5N7EN
• IBM Agile Service Manager Observers: V1.1.8; changed: Yes; Passport Advantage: CJ5N8EN
• Jazz for Service Management: V1.1.3.7; changed: Yes; Passport Advantage: CJ49VML; Fix Central: V1.1.3.7
• WebSphere® Application Server: V8.5.5.17; changed: Yes; included in Jazz for Service Management; Fix Central: V8.5.5 Fix Pack 17
• Java™ SDK for WebSphere Application Server: V8.0.5.6; changed: No
• IBM Cognos® Analytic Server: V11; changed: No; Passport Advantage: CJ5LIML
• IBM Cognos Analytics: V11; changed: No; Passport Advantage: CJ5LLML
• IBM Cognos Software Development Kit: V11; changed: No; Passport Advantage: CJ5LKML
• IBM Cognos Analytics Samples: V11; changed: No; Passport Advantage: CJ5LJML

Integrations
The following table lists the products that can be integrated with Netcool Operations Insight, together with the versions of these products that are compatible with this release.
Note: If no Passport Advantage or Fix Central entry is listed for a particular product or component, then there is nothing to download from that location.

Table 5. Products that can be integrated with Netcool Operations Insight
• IBM Tivoli Monitoring: V6.3.0; Passport Advantage: CRS7JML
• IBM Tivoli Monitoring Agents: Passport Advantage: CRS7KML
• IBM Tivoli Monitoring Agents for Tivoli Network Manager IP Edition V4.2: V6.3.0.2; Passport Advantage: CRYY6ML
• Tivoli Business Service Manager: V6.1.1.5; Passport Advantage: CRL8FML; Fix Central: Fix Pack 5

About Netcool Operations Insight
IBM Netcool Operations Insight consists of a base operations management solution, which can be optionally extended by integrating the Network Management, Performance Management, and Service Management solution extensions.

The full name of the Netcool Operations Insight base solution is Operations Management for Operations Insight. This base solution provides the capability of monitoring the health and performance of IT and network infrastructure across local, cloud, and hybrid environments. It also incorporates strong event management capabilities, and leverages real-time alarm and alert analytics, combined with broader historical data analytics. You can optionally extend this base solution by adding the following solution extensions:

Network Management for Operations Insight
Network Management adds network discovery, visualization, event correlation, topology-based root-cause analysis, and configuration and compliance management capabilities. It also adds network dashboarding and topology search capabilities. The extension is provided by integrating the Network Manager and Netcool Configuration Manager products.

Performance Management for Operations Insight
Performance Management adds performance management capability, including a wide range of dashboarding and flow capabilities. The extension is provided by integrating the Network Performance Insight product.

Service Management for Operations Insight
Service Management adds service management capability, including up-to-date visibility and control over dynamic infrastructure and services. For example, you can query a specific networked resource, and then present a configurable topology view of it within its ecosystem of relationships and states, both in real time and within a definable time window. The extension is provided by integrating the Agile Service Manager product.

About Operations Management
Use this information to understand more about the Netcool Operations Insight base solution.

Operations Management capabilities
Use this information to understand the capabilities of Operations Management. Operations Management is made up of the following products and components:
• IBM Tivoli Netcool/OMNIbus
• Tivoli Netcool/OMNIbus Web GUI
• IBM Tivoli Netcool/Impact
• IBM Operations Analytics - Log Analysis (on premises or hybrid)
• Event Analytics (on premises)
• The cloud native analytics service (on Red Hat OpenShift)
• Event Search
• IBM Connections Integration

Operations Management leverages real-time alarm and alert analytics, combined with broader historical data analytics. Netcool Operations Insight is powered by the fault management capabilities of IBM Tivoli Netcool/OMNIbus and IBM's leading big data technologies within IBM Operations Analytics - Log Analysis, providing powerful event search and historical analysis in a single solution. Operations Management integrates infrastructure and operations management into a single coherent structure across business applications, virtualized servers, network devices and protocols, internet protocols, and security and storage devices.

Capabilities of a full Netcool Operations Insight on Red Hat OpenShift deployment

The cloud native analytics service
The cloud native analytics service provides correlation and enrichment capabilities based on historical and pattern analysis of your historical data.

IBM Connections Integration
Netcool/Impact enables social collaboration through IBM Connections by automatically providing updates to key stakeholders.

Capabilities of a hybrid Netcool Operations Insight on Red Hat OpenShift deployment

The cloud native analytics service
The cloud native analytics service provides correlation and enrichment capabilities based on historical and pattern analysis of your historical data.

Event Search
Event Search applies the search and analysis capabilities of Operations Analytics - Log Analysis to events that are monitored and managed by Tivoli Netcool/OMNIbus.

Capabilities of an Operations Management on-premises deployment

Event Analytics (on premises)
Event Analytics (on premises) provides analysis of event data to correlate and group events, and reduce the number of events that are presented to the operator.

Event Search
Event Search applies the search and analysis capabilities of Operations Analytics - Log Analysis to events that are monitored and managed by Tivoli Netcool/OMNIbus.

IBM Connections Integration
Netcool/Impact enables social collaboration through IBM Connections by automatically providing updates to key stakeholders.

Operations Management tasks
Use this information to understand the tasks that users can perform using Operations Management. Operations Management tasks fall into the following categories:
• Event Search tasks
• Event Analytics tasks

Event Search tasks (on premises and hybrid only)
Using Event Search, operations staff can use the analytics available in Operations Analytics - Log Analysis to determine how the monitoring environment is performing over time.

Using Event Search
Network operators can diagnose and triage events in the Event Viewer by using the search and analysis capabilities within Event Search. An example of this is the use of Event Search to narrow down the cause of an event storm by running the Event Search dashboard and timeline tools against selected events in the Event Viewer.

Configuring Event Search
Administrators can make extra event data available within Event Search to provide operations staff with a more semantically rich set of data to use in the Event Search dashboard and timeline tools. This, in turn, helps operators to more effectively diagnose and triage events using Event Search. Administrators can also customize Event Search dashboards to enable operations staff to more effectively analyze event data.

Event Analytics (NOI on premises) tasks
Using Event Analytics, operations staff can determine event patterns, groups, and seasonality, and use this knowledge to build rules that create parent and synthetic events, thereby reducing the event count and presenting operators with events that are closer to the underlying incidents.

Using Event Analytics
Operations staff can review generated analytics reports, and drill into the reports to see seasonality graphs, related event groups, and event patterns. Based on an analysis of the report data, they can set up rules to act on live events and thereby reduce the event count and improve the quality of the events in the Event Viewer.

Configuring Event Analytics
Administrators can customize Event Analytics in a variety of ways:
• Making custom data available within seasonal and related event reports to provide operations staff with a richer set of analytics data.
• Changing the mechanism used by seasonality to suppress events.
• Configuring how event pattern processing is performed.
In addition, administrators can set up configuration scans to run against historical data over a specified time range. They can specify which type of analytic to run and can set up a schedule so that analytics reports are automatically generated for operations staff.

cloud native analytics (NOI on a container platform) tasks
The cloud native analytics service analyzes historic and live event data to create event grouping policies that reduce the number of events displayed in the Event Viewer, and presents operators with events that are closer to the underlying incident.

Using cloud native analytics
The cloud native analytics service analyzes historic and live event data and suggests policies to group events together in a more coherent way for the operator than ungrouped single events. Policies group events in temporal groups, where events usually occur together, and in seasonal groups, where events consistently occur at the same times or dates. Operators can also create scope-based policies that group events together based on a common attribute, such as a resource (scope-based grouping). Users can choose which policies to deploy, and these policies then group incoming events under a parent event, from which the user can drill down to individual events if required. Scheduled training runs ensure that the grouping policies maintain their relevance to the stream of incoming events.

Configuring cloud native analytics
Administrators can customize cloud native analytics in various ways:
• Determine the event data to be analyzed to generate policies.
• Compare and rank the policy groupings.
• Reject or approve policies for deployment.

Topology Analytics on a container platform tasks
(Requires the IBM Agile Service Manager extension.) With Topology Analytics, events that have an associated resource in the topology are enriched with topological information. Users can see topological context when they are investigating an incident, and drill down into the topology that relates to an event, enabling faster identification and resolution of problems. Users can use the following features:
• Use the Event Viewer to see when an event is associated with a status in Agile Service Manager.
• Use the Incident Viewer with an embedded topology hop view to see the relationship between an incident's events and any associated topology.
• Launch to a full Topology Viewer for all topologically enriched events.

Related tasks
Using Event Search

Related reference
Customizing the Netcool/OMNIbus Insight Pack
You can customize Event Search to your specific needs by customizing the Netcool/OMNIbus Insight Pack. For example, you might want to send an extended set of Netcool/OMNIbus event fields to Operations Analytics - Log Analysis and chart results based on those fields.

Operations Management on premises data flow
Use this information to understand how event data is retrieved from a monitored application environment and transferred between the products and components of the base Netcool Operations Insight solution in order to provide the Event Analytics and Event Search capabilities. The following figure shows a simplified data flow between the products of the base Netcool Operations Insight solution.

Figure 2. Data flow for the Netcool Operations Insight on premises base solution.

The stages of this data flow are as follows, indicated by the call-out graphics (for example, 1).

Capture of alert data
Probes monitor the devices and applications in the environment.
1: Alerts are received from applications and devices
Alert data is captured by the probes and forwarded to the Netcool/OMNIbus ObjectServer. Event data is then manipulated in various data flows.

Web GUI data flow
Event data is enriched and visualized in the Web GUI.
2: Event data is read from the ObjectServer and enriched
Netcool/Impact reads the event data from the ObjectServer. In Netcool/Impact, the event data is enriched by information retrieved by Impact policies.
3: Event data is visualized and managed in the Web GUI
The Web GUI displays the application events that are in the ObjectServer. From the event lists, you can run tools that change the event data; these changes are synchronized with the data in the ObjectServer.

Event Analytics data flow
Event data is archived, and historical event data is used to generate analytics data.

Chapter 1. Solution overview 17 4 : Events are read from the ObjectServer by the Gateway for JDBC The Gateway for JDBC reads events from the ObjectServer. 5 : Event data is archived The Gateway for JDBC sends the event data via an HTTP interface to the Historical Event Database. The figure shows an IBM Db2 database but any supported database can be used. The gateway must be configured in reporting mode. This data flow is a prerequisite for the event analytics capability. 6 : Event analytics algorithms run on archived event data After a set of historical alerts is archived, the seasonality algorithms of the Netcool/Impact policies can generate seasonal reports. The related events function analyzes Netcool/OMNIbus historical event data to determine which events have a statistical tendency to occur together and can therefore be grouped into related event groups. Pattern functions analyze the statistically related event groups to determine if the groups have any generic patterns that can be applied to events on other network resources. 7 : Analytics data is visualized and managed The seasonality function helps you identify and examine seasonal trends while monitoring and managing events. This capability is delivered in a Seasonal Events Report portlet in Dashboard Application Services Hub. The portlet contains existing seasonal reports, which can be used to identify the seasonal pattern of the events in the Event Viewer. You can create new seasonal reports and edit existing ones. Statistically related groups can be analyzed in the Related Events GUI. Validated event groups can be deployed as Netcool/Impact correlation rules. Patterns in the statistically related event groups can also be analyzed in the Related Events GUI. These patterns can be extracted and deployed as Netcool/Impact generalized patterns. Event Search data flow Event data is indexed in Operations Analytics - Log Analysis and used to display event dashboard and timelines. 8 : Events are read from the ObjectServer by Gateway for Message Bus The Gateway for Message Bus reads events from the ObjectServer. 9 : Event data is transferred for indexing to Operations Analytics - Log Analysis The Gateway for Message Bus sends the event data via an HTTP interface to the Operations Analytics - Log Analysis product where the event data is indexed. The Tivoli Netcool/OMNIbus Insight Pack V1.3.0.0 parses the event data into a format suitable for use by Operations Analytics - Log Analysis. The diagram shows the default IDUC connection, which sends only event inserts. For event inserts and reinserts, the Accelerated Event Notification client can be deployed, which can handle greater event volumes. See “On-premises scenarios for Operations Management” on page 26. 10 : Event search data is visualized Event search results are visualized in Operations Analytics - Log Analysis event dashboards and timelines by performing right-click tools from event lists in Web GUI. Related information Tivoli Netcool/OMNIbus architecture IBM Operations Analytics - Log Analysis architecture Overview of Netcool/Impact deployments

About Network Management
Use this information to understand more about the Network Management for Operations Insight solution extension.

Network Management capabilities
Use this information to understand the capabilities of Network Management. Network Management is made up of the following products and components:
• Network Manager
• Netcool Configuration Manager
• Topology Search

Networks for Operations Insight is an optional solution extension that can be added to a deployment of the base Netcool Operations Insight solution to provide service assurance in dynamic network infrastructures. The capabilities of Networks for Operations Insight include network discovery, visualization, event correlation and root-cause analysis, and configuration and compliance management. It contributes to overall operational insight into application and network performance management. The Networks for Operations Insight capability is provided through the Network Manager and Netcool Configuration Manager products. The components and capabilities of Network Management are described below:

Network Health Dashboard
This dashboard leverages the capabilities of the Network Manager, Netcool/OMNIbus, and Netcool Configuration Manager products to display availability, performance, event, and configuration data for selected network views. The Network Health Dashboard displays device and interface availability within a selected network view. It also reports on performance by presenting graphs, tables, and traces of KPI data for monitored devices and interfaces. A dashboard timeline reports on device configuration changes and event counts, enabling correlation of events with configuration changes. The dashboard includes the Event Viewer for more detailed event information.

Topology Search
This capability provides insight into network performance by determining the lowest-cost routes between two endpoints on the network over time. The topology search capability is an extension of the Networks for Operations Insight feature. It applies the search and analysis capabilities of Operations Analytics - Log Analysis to give insight into network performance. Events that have been enriched with network data are analyzed by the Network Manager Insight Pack and are used to calculate the lowest-cost routes between two endpoints on the network topology over time. The events that occurred along the routes over the specified time period are identified and shown by severity. The topology search requires the Networks for Operations Insight feature to be installed and configured.

Network Management tasks
Use this information to understand the tasks that users can perform using Network Management. Network Management tasks fall into the following categories:
• Custom dashboard development tasks
• Network Health Dashboard tasks
• Topology Search tasks

Custom dashboard development tasks
Administrators can create pages that act as "dashboards" for displaying information on the status of parts of your network, and they can edit existing dashboards, such as the Network Health Dashboard. They can select from the widgets that are provided with Network Manager, Tivoli Netcool/OMNIbus Web GUI, and also from other products that are deployed in your Dashboard Application Services Hub environment.

Network Health Dashboard tasks
The Network Health Dashboard displays device and interface availability within a selected network view. It also reports on performance by presenting graphs, tables, and traces of KPI data for monitored devices and interfaces. A dashboard timeline reports on device configuration changes and event counts, enabling correlation of events with configuration changes. The dashboard includes the Event Viewer for more detailed event information.
Monitoring the network
Operations staff can use the Network Health Dashboard to monitor the network by selecting a network view within an area of responsibility, such as a geographical area, or a specific network service such as BGP or VPN, and reviewing the data that appears in the other widgets on the dashboard.

Administering the dashboard
Administrators can configure how data is displayed, and which data is displayed, in the Network Health Dashboard.

Topology Search tasks
This capability provides insight into network performance by determining lowest-cost routes between two endpoints on the network over time.
Using Topology Search
Operations staff can use the analytics available in the Topology Search capability to obtain insight into network performance. For example, they can visualize the lowest-cost routes between two endpoints on the network topology over time.
Configuring Topology Search
Administrators can configure and customize these tools to match the network and alerting ecosystem.
Related concepts
About the Network Health Dashboard: Use the Network Health Dashboard to monitor a selected network view, and display availability, performance, and event data, as well as configuration and event history, for all devices in that network view.
Related tasks
Developing custom dashboards: You can create pages that act as "dashboards" for displaying information on the status of parts of your network, or edit existing dashboards, such as the Network Health Dashboard. You can select from the widgets that are provided with Network Manager, Tivoli Netcool/OMNIbus Web GUI, and also from other products that are deployed in your Dashboard Application Services Hub environment.
Using Topology Search
Configuring Topology Search: You can configure and customize the Network Manager Insight Pack to meet your requirements.

Network Management data flow
Use this information to understand how event data is retrieved from a monitored application environment and transferred between the products and components of Network Management in order to provide the Topology Search, Network Health Dashboard, and Device Dashboard capabilities. The following figure shows a simplified data flow between the products of Network Management and, where appropriate, on-premises Operations Management.

Figure 3. Simplified data flow

Collection of network topology, polling, and configuration data
1: Network discovery is run
Based on configurations set up by network administrators, Network Manager gathers data about the network. The discovery function identifies what entities are on the network (for example, routers and switches) and interrogates them, for example, for connectivity information.
2: Network topology is stored
Network Manager classifies and stores the network topology that was discovered in step 1 in the NCIM topology database.
3: Network devices and interfaces are polled
Based on configurations set up by network administrators, Network Manager polling policies are run to determine whether a network device is up or down and whether it exceeds key performance parameters, and to identify inter-device link faults.
4: Changes to device configuration and policy changes are detected
Netcool Configuration Manager discovers whether there are any changes to device configuration or policy violations.
Collection and enrichment of alert data
5: Alerts are received from applications and devices
Alert data is captured by probes and forwarded to the ObjectServer.

6: Network events are generated if polls fail
Network Manager generates fault alerts if the device and interface polls (step 3) fail. Network Manager converts the results of the relevant polls into events, and sends these network events to the ObjectServer.
7: Network configuration events are generated if device configurations change
Netcool Configuration Manager generates events for the configuration changes and policy violations (referred to hereafter as network configuration events) that were detected in step 4. Configuration change and policy violation events are sent via the Probe for SNMP to the ObjectServer.
8: Events are enriched using topology data
Network events (generated in step 6) and network configuration events (generated in step 7) are passed to the Event Gateway, where they are enriched with network topology data. For example, the system location, contact information, and product serial number can be added to the events. The events are returned to the ObjectServer.
Once steps 5 to 8 are complete, the Netcool/OMNIbus ObjectServer contains the application events from the probes, the network events from Network Manager, and the network configuration events from Netcool Configuration Manager.
Visualization of events and topology
9: Events are visualized and monitored
The Tivoli Netcool/OMNIbus Web GUI displays the application events, network events, and network configuration events that are in the ObjectServer.
10: Event information is shared
The event information is shared between the Web GUI and the Network Manager GUIs, for example, the Network Views and Hop View.
11: Network topology is visualized
The Network Manager GUIs display the network topology data that is in the NCIM database. This data is enriched by the configuration change and policy event information from the ObjectServer.
12: Network configuration events are analyzed
Configuration changes and policy violations are displayed for further analysis in the following GUIs:
• Network Manager GUIs
• Web GUI Event Viewer
• Netcool Configuration Manager Activity Viewer, wizards, and other Netcool Configuration Manager user interfaces
Using the right-click menus, operators can optionally launch-in-context across into Reporting Services, if it is installed. Reporting Services is not shown on this figure.
Topology search data flow
13: Event data is transferred for indexing to Operations Analytics - Log Analysis
The Gateway for Message Bus sends the event data via an HTTP interface to the Operations Analytics - Log Analysis product, where the event data is indexed. The Network Manager Insight Pack parses the event data into a format suitable for use by Operations Analytics - Log Analysis.
14: Topology search data is visualized
Topology search results are visualized in Operations Analytics - Log Analysis event dashboards and timelines by performing right-click actions on two nodes in the network between which the analysis is required. This is done in one of the following ways: either select two network nodes in a network map within one of the Network Manager GUIs, or select two events in the Web GUI Event Viewer.
Dashboard data flow
15: Network health information is visualized
In the Network Health Dashboard, selecting a network view enables you to visualize availability summary data, top 10 performance data, and configuration timeline data for the devices in that network view. Data used to populate the Network Health Dashboard is retrieved from the ObjectServer, the Network Manager polling databases, and Netcool Configuration Manager.
16: Device and interface health is visualized
You can right-click through to the Device Dashboard from any topology view or event list. In the Device Dashboard, selecting a device enables you to visualize top 10 performance data for the device or any of its interfaces. You can also visualize timeline data for any of the performance metrics associated with the device or any of its interfaces. Data used to populate the Device Dashboard is retrieved from the ObjectServer, the Network Manager polling databases, and from Network Performance Insight.
Related concepts
Netcool Configuration Manager events

About Performance Management Use this information to understand more about the Performance Management for Operations Insight solution.

Capabilities of Performance Management for Operations Insight
Use this information to understand the capabilities of Performance Management. The extension is made up of the following product:
Network Performance Insight
Network Performance Insight is a network traffic performance monitoring system. It provides comprehensive and scalable visibility of network traffic, with visualization and reporting of network performance data for complex, multivendor, multi-technology networks. The end user is able to perform the following tasks: visualize flow across selected interfaces, display performance anomaly events in the Tivoli Netcool/OMNIbus Event Viewer, and view performance anomaly and performance timeline data in the Device Dashboard. For more information, see the Network Performance Insight Knowledge Center: http://www-01.ibm.com/support/knowledgecenter/SSCVHB/welcome .
The components and capabilities of Performance Management are described below:
Device Dashboard (networks and performance)
This dashboard leverages the capabilities of Network Manager and Netcool/OMNIbus to display event data, and top 10 performance metric data, for a selected network device and its interfaces.
Note: This capability requires that both Network Management and Performance Management are integrated into your Netcool Operations Insight solution.
Traffic Details Dashboard
Use the Traffic Details dashboard to monitor network performance and flow details for a particular interface. Network Performance Insight provides built-in and interactive dashboards that cover the entire traffic data representation.
Network Performance Insight Dashboards
If you are a network planner or engineer, use the Network Performance Insight Dashboards to view top 10 information on interfaces across your network, including the following:
• Congestion
• Traffic utilization
• Quality of service

About Service Management Use this information to understand more about the Service Management for Operations Insight solution extension.

Capabilities of Service Management for Operations Insight Use this information to understand the capabilities of Service Management. The extension is made up of the following product: Agile Service Manager

Agile Service Manager provides operations teams with complete, up-to-date visibility and control over dynamic infrastructure and services. Agile Service Manager lets you query a specific networked resource, and then presents a configurable topology view of it within its ecosystem of relationships and states, both in real time and within a definable time window. For more information, see the Agile Service Manager Knowledge Center: https://www-01.ibm.com/support/knowledgecenter/SS9LQB .
The components and capabilities of Service Management are described below:
Topology Viewer
Using the Topology Viewer, you can use elastic search features to easily locate and visualize near real-time and historical configurable views of multidomain topologies, including IT, network, storage, and application data. This enables you to reduce the complexity of managing modern and hybrid services, across vendors, data centers, and traditional management silos.
Observer and API integration
Ease and rapidity of integration with any topology source is provided by means of observers and APIs. Observers are provided for a wide range of data, including event data, Network Manager topology data, TADDM topology data, OpenStack data, multiple file formats, REST, Docker, and VMware. This ensures rapid time-to-value, by providing up-to-date visibility and control over dynamic infrastructure and services.

Chapter 2. Deployment

Plan your deployment of Netcool Operations Insight

Deploying on premises Use this information to understand the architecture of the Netcool Operations Insight deployment on premises. In this deployment mode, all of the Netcool Operations Insight products and components are installed onto servers and use the native computing resources of those servers.

Deployment scenarios for Operations Management When you plan a deployment, it is important to consider the relationship between the event volumes that are supported by Netcool/OMNIbus and the capacity of Operations Analytics - Log Analysis to analyze events. The scenarios available depend on whether you are installing Operations Management on premises or in a private cloud using Red Hat OpenShift.

Deployment considerations for on-premises Operations Management
The desired volume of events determines whether a basic, failover, or desktop architecture or a multitier architecture is deployed. The Gateway for Message Bus can be configured to support event inserts only, or both inserts and reinserts. The following sections explain the architecture and event volume, and the event analysis capacity of Operations Analytics - Log Analysis, in more detail.
Note: Operations Analytics - Log Analysis Standard Edition is included in Netcool Operations Insight. For more information about Operations Analytics - Log Analysis editions, search for "Editions" at the Operations Analytics - Log Analysis Knowledge Center, at https://www.ibm.com/support/knowledgecenter/SSPFMY .
Event volume
Event inserts are the first occurrence of each event; reinserts are repeat occurrences of the same event. By default, the Gateway for Message Bus is configured to accept only event inserts from ObjectServers through an IDUC channel. To support event inserts and reinserts, you can configure event forwarding through the Accelerated Event Notification (AEN) client.
Note: Event Search functionality varies as follows, depending on the choice of channel:
• IDUC channel: Event Search functionality is limited. Chart display functionality is fully available, but you will not be able to perform a deep dive into events or search for event modifications.
• AEN channel: All Event Search functionality is available. However, as part of your Netcool/OMNIbus configuration you will also have to install triggers in the ObjectServer. For more information, search for "Integrating with Operations Analytics - Log Analysis" in the Gateway for Message Bus documentation.
Architecture of Netcool/OMNIbus
Basic, failover, and desktop architectures support low and medium capacity for analyzing events. Multitiered architectures support higher Operations Analytics - Log Analysis capacities. In a multitier architecture, the connection to the Gateway for Message Bus supports higher capacity at the collection layer than at the aggregation layer. For more information about these architectures, see the Netcool/OMNIbus documentation and also the Netcool/OMNIbus Best Practices Guide.
Capacity of Operations Analytics - Log Analysis
This is the volume of events that Operations Analytics - Log Analysis is able to handle. For the hardware levels that are required for expected event volumes, see the Operations Analytics - Log Analysis documentation at http://www-01.ibm.com/support/knowledgecenter/SSPFMY/welcome . If capacity is limited, you can use the deletion tool to remove old data.

Connection layer
The connection layer is the layer of the multitier architecture to which the Gateway for Message Bus is connected. This consideration applies only when the Netcool/OMNIbus product is deployed in a multitier architecture. The connection layer depends on the capacity of Operations Analytics - Log Analysis. For more information about multitier architectures, see the Netcool/OMNIbus documentation and also the Netcool/OMNIbus Best Practices Guide.
Related information
Netcool/OMNIbus Best Practices Guide: For provisioning and sizing advice, refer to the planning chapter in the Netcool/OMNIbus Best Practices Guide.

On-premises scenarios for Operations Management
This topic presents the scenarios available in a deployment of Operations Management on premises, together with the associated architectures. The deployment scenarios and associated architecture diagrams are shown below.
• “Deployment scenarios” on page 26
• “Illustrations of architectures” on page 28

Deployment scenarios
This section describes possible deployment scenarios.
• “Deployment scenario 1: low capacity with IDUC channel” on page 26
• “Deployment scenario 2: medium capacity with AEN channel” on page 26
• “Deployment scenario 3: medium capacity with IDUC channel” on page 27
• “Deployment scenario 4: high capacity with IDUC channel” on page 27
• “Deployment scenario 5: high capacity with AEN channel” on page 27
• “Deployment scenario 6: very high capacity with AEN channel” on page 27

Deployment scenario 1: low capacity with IDUC channel

Table 6. Inserts only, standard architecture, low capacity
Event volume: Inserts only
Architecture of Netcool/OMNIbus: Basic, failover, and desktop architecture
Capacity of Operations Analytics - Log Analysis: Low
Connection layer: Not applicable
IDUC or AEN: IDUC
Illustration of this architecture: See Figure 4 on page 28. Disregard the reference to reinserts in item 1.

Deployment scenario 2: medium capacity with AEN channel

Table 7. Inserts and reinserts, standard architecture, medium capacity
Event volume: Inserts and reinserts
Architecture of Netcool/OMNIbus: Basic, failover, and desktop architecture
Capacity of Operations Analytics - Log Analysis: Medium
Connection layer: Not applicable
IDUC or AEN: AEN
Illustration of this architecture: See Figure 4 on page 28.

Deployment scenario 3: medium capacity with IDUC channel

Table 8. Inserts only, multitier architecture, medium capacity
Event volume: Inserts only
Architecture of Netcool/OMNIbus: Multitier
Capacity of Operations Analytics - Log Analysis: Medium
Connection layer: Aggregation layer
IDUC or AEN: IDUC
Illustration of this architecture: See Figure 5 on page 29. Disregard the reference to reinserts in item 1.

Deployment scenario 4: high capacity with IDUC channel

Table 9. Inserts only, multitier architecture, high capacity
Event volume: Inserts only
Architecture of Netcool/OMNIbus: Multitier
Capacity of Operations Analytics - Log Analysis: High
Connection layer: Collection layer
IDUC or AEN: IDUC
Illustration of this architecture: See Figure 6 on page 30. Disregard the reference to reinserts in item 1.

Deployment scenario 5: high capacity with AEN channel

Table 10. Inserts and reinserts, multitier architecture, high capacity
Event volume: Inserts and reinserts
Architecture of Netcool/OMNIbus: Multitier
Capacity of Operations Analytics - Log Analysis: High
Connection layer: Aggregation layer
IDUC or AEN: AEN
Illustration of this architecture: See Figure 5 on page 29.

Deployment scenario 6: very high capacity with AEN channel

Table 11. Inserts and reinserts, multitier architecture, very high capacity
Event volume: Inserts and reinserts
Architecture of Netcool/OMNIbus: Multitier
Capacity of Operations Analytics - Log Analysis: Very high
Connection layer: Collection layer
IDUC or AEN: AEN
Illustration of this architecture: See Figure 6 on page 30.

Illustrations of architectures
The following sections show the architecture of Operations Analytics - Log Analysis deployments and how they fit into the various architectures of Netcool/OMNIbus deployments with the Gateway for Message Bus. The data source that is described in the figures is the raw data that is ingested by the Operations Analytics - Log Analysis product. You define it when you configure the integration between the Operations Analytics - Log Analysis and Netcool/OMNIbus products.
• “Basic, failover, and desktop architectures” on page 28
• “Multitier architecture, events are sent from the Aggregation layer” on page 28
• “Multitier architecture, events are sent from the Collection layer” on page 29

Basic, failover, and desktop architectures The following figure shows how the integration works in a basic, failover, or desktop Netcool/OMNIbus architecture. This figure is an illustration of the architectures that are described in Table 6 on page 26 and Table 7 on page 26. In the case of the architecture in Table 6 on page 26, disregard item 1 in this figure.

Figure 4. Basic, failover, and desktop deployment architecture

Multitier architecture, events are sent from the Aggregation layer The following figure shows how the integration works in a multitier Netcool/OMNIbus architecture, with events sent from the Aggregation layer. This figure is an illustration of the architectures that are described in Table 8 on page 27 and Table 10 on page 27. In the case of the architecture in Table 8 on page 27, disregard item 1 in this figure.

Figure 5. Multitier architecture deployment - Aggregation layer

Multitier architecture, events are sent from the Collection layer The following figure shows how the integration works in a multitier Netcool/OMNIbus architecture, with events sent from the Collection layer. This is a best practice for integrating the components. This figure is an illustration of the architectures that are described in Table 9 on page 27 and Table 11 on page 27. In the case of the architecture in Table 9 on page 27, disregard item 1 in this figure.

Figure 6. Multitier architecture deployment - Collection layer (best practice)

Related concepts
Overview of the standard multitiered architecture
Overview of the AEN client
Related tasks
Sizing your Tivoli Netcool/OMNIbus deployment
Configuring and deploying a multitiered architecture
Installing Netcool/OMNIbus and Netcool/Impact
Related reference
Failover configuration
Example Tivoli Netcool/OMNIbus installation scenarios (basic, failover, and desktop architectures)
Related information
Message Bus Gateway documentation
IBM Operations Analytics - Log Analysis documentation
IBM developerWorks: Tivoli Netcool OMNIbus Best Practices: Click here to access best practice documentation for Netcool/OMNIbus.

Deployment considerations for Network Management Networks for Operations Insight is an optional feature that integrates network management products with the products of the base Netcool Operations Insight solution. The Networks for Operations Insight feature includes Network Manager IP Edition and Netcool Configuration Manager. Deploying these products depends on your environment and the size and complexity of your network. For guidance on deployment options, see the respective product documentation:
• Network Manager IP Edition: https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/install/concept/ovr_deploymentofitnm.html
• Netcool Configuration Manager: https://www.ibm.com/support/knowledgecenter/SS7UH9_6.4.2/ncm/wip/planning/concept/ncm_plan_planninginstallation.html

Example of an on-premises physical deployment Use this example to familiarize yourself with the architecture of an on-premises physical deployment of Netcool Operations Insight. The architecture described in this example can be scaled up and extended for failover, a multitiered architecture, load balancing, and clustering. This scenario assumes that there are no existing Netcool Operations Insight products in your environment, so no backup, restore, or upgrade information is given. The information supplied in this scenario is high-level and covers the most salient points and possible issues you might encounter that are specific to Netcool Operations Insight. The steps to install the Networks for Operations Insight feature are included, but skip these steps if you want to install only the base solution. This scenario is end-to-end and you must perform the tasks in the specified order. For more information about each task in this scenario, see the Related concept, task, and information links at the bottom of each page. The following figure shows the simplified installation architecture that this scenario adheres to.

Figure 7. Simplified installation architecture for the installation scenario

For information on the product and component versions supported in the current version of Netcool Operations Insight, including supported fix packs, see “Products and components on premises” on page 5.
Server 1
Hosts the Netcool/OMNIbus core components, the Gateway for JDBC, the Gateway for Message Bus, and Netcool/Impact. Configurations are applied to the ObjectServer to support the event analytics and topology search capabilities. Event analytics is part of the base Netcool Operations Insight solution. Topology search is part of the Networks for Operations Insight feature. The default configuration of the Gateway for Message Bus is to transfer event inserts to Operations Analytics - Log Analysis through an IDUC channel. This connection can be changed to forward event reinserts and inserts through the Accelerated Event Notification client.

Server 2
Hosts an IBM Db2 database and Operations Analytics - Log Analysis. The Tivoli Netcool/OMNIbus Insight Pack and the Network Manager Insight Pack are installed into Operations Analytics - Log Analysis. The Tivoli Netcool/OMNIbus Insight Pack is part of the base Netcool Operations Insight solution. The Network Manager Insight Pack is part of the Networks for Operations Insight feature. The REPORTER schema is applied to the Db2 database so that events can be transferred from the Gateway for JDBC. Various installation methods are possible for Db2. For more information, see the Db2 Knowledge Center: https://ibm.biz/BdEWtm .
Server 3
Hosts Dashboard Application Services Hub, which is a component of Jazz for Service Management. Jazz for Service Management provides the GUI framework and the Reporting Services component. The Netcool/OMNIbus Web GUI and the Event Analytics component are installed into Dashboard Application Services Hub. In this setup, Reporting Services is also installed on this server, together with parts of the Networks for Operations Insight feature: the Network Manager IP Edition GUI components, Netcool Configuration Manager, and the Agile Service Manager UI. This simplifies the configuration of the GUI server, and provides the reporting engine and the report templates provided by the products on one host.
Note: You can set up Network Manager and Netcool Configuration Manager to work with Reporting Services by installing their respective reports when installing the products.
Netcool/OMNIbus V8.1.0.22 and later can be integrated with Reporting Services V3.1 to support reporting on events. To configure this integration, connect Reporting Services to a relational database through a gateway. Then, import the report package that is supplied with Netcool/OMNIbus into Reporting Services. For more information about event reporting, see the Netcool/OMNIbus documentation: http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/install/task/omn_con_ext_deploytcrreports.html .
Server 4
Hosts the Netcool Configuration Manager presentation and worker server, the Network Manager IP Edition core components, and the NCIM topology database, which are all components of the Networks for Operations Insight feature. This setup assumes large networks, where discovering the network and creating and maintaining the network topology can require significant system resources.
Server 5
Hosts the Network Performance Insight components that support the performance management feature. For information on installation and configuration of Network Performance Insight, see “Installing Performance Management” on page 91.
Server 6
Hosts the Agile Service Manager components that support the service management feature, including the Agile Service Manager core and the Agile Service Manager observers. For information on installation and configuration of Agile Service Manager, see the Agile Service Manager documentation at https://www.ibm.com/support/knowledgecenter/SS9LQB_1.1.8/welcome_page/kc_welcome-444.html .

Deploying on Red Hat OpenShift Learn about the deployment scenarios supported on Red Hat OpenShift. Note: The current version of Netcool Operations Insight is compatible with Red Hat OpenShift versions 4.4.3 and 4.3, and all of the documentation links point to that version of the Red Hat OpenShift documentation: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.4/ You can install a full deployment of Netcool Operations Insight on Red Hat OpenShift. For more information, see “Installing on Red Hat OpenShift only” on page 100. You can also install a hybrid deployment of Netcool Operations Insight, with some components on-premises, and some components on Red Hat OpenShift. Configure a new or existing on-premises installation of Operations Management with cloud native Netcool Operations Insight components on Red Hat OpenShift. For more information, see “Deploying on a hybrid architecture” on page 136. Note: Integration with on-premises IBM Agile Service Manager is not supported for hybrid deployments.

32 IBM Netcool Operations Insight: Integration Guide A Netcool Operations Insight on OpenShift deployment is supported on the following platforms: • Amazon Web Services (AWS) • Google Cloud Platform • Microsoft Azure • RedHat OpenShift Kubernetes Service (ROKS) Note: When you deploy on Red Hat OpenShift, use the oc command line tool. If you install in an environment where oc isn’t available, replace it with the kubectl command line tool.

Chapter 3. Installing Netcool Operations Insight

Netcool Operations Insight can be deployed on-premises, on a supported cloud platform, or on a hybrid cloud and on-premises architecture. Netcool Operations Insight deployed on the cloud is called Operations Management.

Installing on-premises Follow these instructions to prepare for and install IBM Netcool Operations Insight on-premises. Click here to download the Netcool Operations Insight Installation Guide.

Planning for an on-premises installation Prepare for an on-premises installation of base Netcool Operations Insight and of the Netcool Operations Insight solution extensions.

Ports used by products and components Use this information to understand which ports are used by the different products and components that make up the Netcool Operations Insight solution. The following table lists sample ports that you might need to configure, and provides links to Netcool Operations Insight product and component documentation where you can access detailed information.

Table 12. Default port information

Netcool/OMNIbus
Example default ports: Aggregation ObjectServer primary port; process agent ports; gateway server port; IBM Eclipse Help System server port; port numbers for individual Netcool/OMNIbus probes.
Links: ObjectServer ports can be configured using the Netcool Configuration wizard, see http://ibm.biz/BdskVc . Default ports used by Netcool/OMNIbus, see http://ibm.biz/BdskVr . Ports for a Netcool/OMNIbus basic architecture, see http://ibm.biz/BdsWi9 . Ports for a Netcool/OMNIbus basic failover architecture, see http://ibm.biz/BdsWqw . Ports for a Netcool/OMNIbus desktop server architecture, see http://ibm.biz/BdsWqt .

Netcool/OMNIbus Web GUI
Example default ports: Jazz for Service Management WAS profile HTTP port and HTTPS port.
Links: Jazz for Service Management port availability requirements, see http://ibm.biz/BdsWzf . Firewall ports to open for DASH services: see the technote at http://www-01.ibm.com/support/docview.wss?uid=swg21687730 .


Netcool/Impact
Example default ports: Netcool/Impact server HTTP and HTTPS ports; Netcool/Impact GUI HTTP and HTTPS ports.
Links: Assigning Netcool/Impact ports, see http://ibm.biz/BdsWyK . Assigning Netcool/Impact data source and service ports, see http://ibm.biz/BdsWyG .
Note: It is not possible to install the Netcool/Impact GUI and Jazz for Service Management using the same default port numbers (16310/16311) on the same server. In this case, you must modify the port numbers during installation.

Operations Analytics - Log Analysis
Example default ports: Application WebConsole port; Application WebConsole secure port; Database Server port; Data Collection Server port.
Links: Default ports used by Operations Analytics - Log Analysis: for V1.3.5, see http://ibm.biz/BdsWyn ; for V1.3.3, see http://ibm.biz/BdiyPy .

Db2 Enterprise Server Edition database
Example default ports: Port 50000.
Note: This port is also configurable following installation.

Network Performance Insight
Example default ports: Ports must be assigned for the following Network Performance Insight components: Ambari Metrics; Hadoop Distributed File System (HDFS); Apache Kafka.
Links: Default ports used by Network Performance Insight: for V1.3.1, see http://ibm.biz/npi_131_ports ; for V1.2.3, see http://ibm.biz/npi_123_ports .

Related concepts
Installing Db2 and configuring the REPORTER schema: Netcool Operations Insight requires a Db2 database with the REPORTER schema for historical event archiving.
Related tasks
Installing Netcool/OMNIbus and Netcool/Impact
Installing IBM Operations Analytics - Log Analysis: Operations Analytics - Log Analysis supports GUI, console, and silent installations. The installation process differs for 64-bit and z/OS operating systems.
Installing Network Performance Insight: Install Network Performance Insight by performing the steps in the Installation section of the Network Performance Insight documentation.

Checking prerequisites Before you install each product, run the IBM Prerequisite Scanner (PRS) to ensure that the target host is suitable, and no installation problems are foreseeable. Also check the maxproc and ulimit settings on the servers you are configuring to ensure they are set to the appropriate minimum values.

Before you begin
• For information about hardware and software compatibility of each component, and detailed system requirements, see the IBM Software Product Compatibility Reports website: http://www-969.ibm.com/software/reports/compatibility/clarity/index.html
Tip: When you create a report, search for Netcool Operations Insight and select your version (for example, V1.4). In the report, additional useful information is available through hover help and additional links. For example, to check the compatibility with an operating system for each component, go to the Operating Systems tab, find the row for your operating system, and hover over the icon in the Components column. For more detailed information about restrictions, click the View link in the Details column.
• Download IBM Prerequisite Scanner from IBM Fix Central at http://www.ibm.com/support/fixcentral/ . Search for "IBM Prerequisite Scanner".
• After you download the latest available version, decompress the .tar archive into the target directory on all hosts.
• On the IBM Tivoli Netcool/Impact host, set the environment variable IMPACT_PREREQ_BOTH=True so that the host is scanned for both the Impact Server and the GUI Server; an example invocation is shown below. For a list of all product codes, see http://www.ibm.com/support/docview.wss?uid=swg27041454
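For example, on the Netcool/Impact host the scanner might be run as follows (a sketch; it assumes the scanner archive was extracted into the current directory):

# Scan the host for both the Impact Server and the GUI Server
export IMPACT_PREREQ_BOTH=True
./prereq_checker.sh NCI detail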

About this task Operations Analytics - Log Analysis and IBM Db2 are not supported by IBM Prerequisite Scanner. For the installation and system requirements for these products, refer to the documentation.

Procedure
Using the IBM Prerequisite Scanner
• On the IBM Tivoli Netcool/OMNIbus and IBM Tivoli Netcool/Impact hosts, run IBM Prerequisite Scanner as follows:
– IBM Tivoli Netcool/OMNIbus: prereq_checker.sh NOC detail
– IBM Tivoli Netcool/Impact: prereq_checker.sh NCI detail
• On the host for the GUI components:
– Jazz for Service Management: prereq_checker.sh ODP detail
– Dashboard Application Services Hub: prereq_checker.sh DSH detail
– IBM Tivoli Netcool/OMNIbus Web GUI: prereq_checker.sh NOW detail
• On the Networks for Operations Insight host:
– IBM Tivoli Network Manager: prereq_checker.sh TNM detail
– IBM Tivoli Netcool Configuration Manager: prereq_checker.sh NCM detail

– Tivoli Common Reporting: prereq_checker.sh TCR detail
Check the maxproc settings.
• Open the following file: /etc/security/limits.d/90-nproc.conf
• Set nproc to a value of 131073
Check the ulimit settings.
• Open the following file: /etc/security/limits.conf
• Set nofile to a value of 131073
A sketch of these file edits follows the related links below.
Related tasks
Installation prerequisites for Operations Analytics - Log Analysis V1.3.5
Installation prerequisites for Operations Analytics - Log Analysis V1.3.3
System requirements for Db2 products
Installation requirements for Db2 products
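The maxproc and ulimit checks above amount to lines like the following. This is a sketch only: the wildcard domain and the soft/hard split are assumptions, and you can scope the entries to the user that runs the Netcool processes instead.

# /etc/security/limits.d/90-nproc.conf
*    soft    nproc     131073

# /etc/security/limits.conf
*    soft    nofile    131073
*    hard    nofile    131073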

Obtaining IBM Installation Manager Perform this task only if you are installing directly from an IBM repository or a local repository. IBM Installation Manager is required on the computers that host Netcool/OMNIbus, Netcool/Impact, Operations Analytics - Log Analysis, and the products and components that are based on Dashboard Application Services Hub. In this scenario, that is servers 1, 2, and 3. The installation packages of the products include Installation Manager.

Before you begin
Create an IBM ID at http://www.ibm.com . You need an IBM ID to download software from IBM Fix Central.
Note: On Red Hat Enterprise Linux®, the GUI mode of the Installation Manager uses the libcairo UI libraries. The latest updates for RHEL 6 contain a known issue that causes the Installation Manager to crash. Before installing Installation Manager on Red Hat Enterprise Linux 6, follow the instructions in the following technote to configure the libcairo UI libraries to a supported version: http://www.ibm.com/support/docview.wss?uid=swg21690056
Remember: The installation image of Netcool/OMNIbus V8.1.0.22 available from IBM Passport Advantage and on DVD includes Installation Manager. You only need to download Installation Manager separately if you are installing Netcool/OMNIbus directly from an IBM repository or from a local repository.
You can install Installation Manager in one of three user modes: Administrator mode, Nonadministrator mode, or Group mode. The user modes determine who can run Installation Manager and where product data is stored. The following table shows the supported Installation Manager user modes for products in IBM Netcool Operations Insight.

Table 13. Supported Installation Manager user modes
IBM Tivoli Netcool/OMNIbus (1): Administrator mode, Nonadministrator mode, and Group mode are supported.
IBM Tivoli Netcool/Impact: Administrator mode, Nonadministrator mode, and Group mode are supported.

1 Includes OMNIbus Core, Web GUI, and the Gateways.

IBM Operations Analytics - Log Analysis (2): Nonadministrator mode is supported.

Procedure
The IBM Fix Central website offers two approaches to finding product files: Select product and Find product. The following instructions apply to the Find product option.
1. Go to IBM Fix Central at http://www.ibm.com/support/fixcentral/ and search for IBM Installation Manager.
a) On the Find product tab, enter IBM Installation Manager in the Product selector field.
b) Select V1.8.5 from the Installed Version list.
c) Select your intended host operating system from the Platform list and click Continue.
2. On the Identify Fixes page, choose Browse for fixes and Show fixes that apply to this version (1.X.X.X). Click Continue.
3. On the Select Fixes page, select the installation file appropriate to your intended host operating system and click Continue.
4. When prompted, enter your IBM ID and password.
5. If your browser has Java enabled, choose the Download Director option. Otherwise, select the HTTP download option.
6. Start the installation file download. Make a note of the download location.

What to do next Install Installation Manager. See http://www.ibm.com/support/docview.wss?uid=swg24034941. Related information IBM Installation Manager overview

Installing Installation Manager (GUI or console example) You can install Installation Manager V1.8.5 with a wizard-style GUI or an interactive console, as depicted in this example.

Before you begin
Take the following actions:
• Extract the contents of the Installation Manager installation file to a suitable temporary directory.
• Ensure that the necessary user permissions are in place for your intended installation, data, and shared directories.
• The console installer does not report required disk space. Ensure that you have enough free space before you start a console installation.
Before you run the Installation Manager installer, create the following target directories and set the file permissions for the designated user and group that Installation Manager runs as, and that any subsequent product installations use:
Main installation directory
Location to install the product binary files.
Data directory
Location where Installation Manager stores information about installed products.

2 Insight Packs are installed by the Operations Analytics - Log Analysis pkg_mgmt command.

Shared directory
Location where Installation Manager stores downloaded packages that are used for rollback.
Ensure that these directories are separate. For example, run the following commands:

mkdir /opt/IBM/NetcoolIM mkdir /opt/IBM/NetcoolIM/IBMIM mkdir /opt/IBM/NetcoolIM/IBMIMData mkdir /opt/IBM/NetcoolIM/IBMIMShared chown -R netcool:ncoadmin /opt/IBM/NetcoolIM

About this task The initial installation steps are different depending on which user mode you use. The steps for completing the installation are common to all user modes and operating systems. Installation Manager takes account of your current umask settings when it sets the permissions mode of the files and directories that it installs. Using Group mode, Installation Manager ignores any group bits that are set and uses a umask of 2 if the resulting value is 0.

Procedure
1. Install in Group mode:
a) Use the id utility to verify that your current effective user group is suitable for the installation. If necessary, use the following command to start a new shell with the correct effective group: newgrp group_name
b) Use the umask utility to check your umask value. If necessary, change the umask value.
c) Change to the temporary directory that contains the Installation Manager installation files.
d) Use the following command to start the installation:
GUI installation: ./groupinst -dL data_location
Console installation: ./groupinstc -c -dL data_location
In this command, data_location specifies the data directory. You must specify a data directory that all members of the group can access.
Remember: Each instance of Installation Manager requires a different data directory.
2. Follow the installer instructions to complete the installation. The installer requires the following input at different stages of the installation:
GUI installation:
• In the first page, select the Installation Manager package.
• Read and accept the license agreement.
• When prompted, enter an installation directory or accept the default directory.
• Verify that the total installation size does not exceed the available disk space.
• When prompted, restart Installation Manager.
Console installation:
• Read and accept the license agreement.
• When prompted, enter an installation directory or accept the default directory.
• If required, generate a response file. Enter the directory path and a file name with a .xml extension. The response file is generated before installation completes.
• When prompted, restart Installation Manager.
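For example, a Group mode GUI installation that uses the data directory created in the preparation steps might be started as follows (a sketch; the path is the one from the mkdir commands above):

./groupinst -dL /opt/IBM/NetcoolIM/IBMIMData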

Results
Installation Manager is installed and can now be used to install IBM Netcool Operations Insight.
Note: If it is not possible for you to install Netcool Operations Insight components in GUI mode (for example, security policies at your site might limit the display of GUI pages), then you can use Installation Manager to install the Netcool Operations Insight base solution components, which are as follows:
• IBM Tivoli Netcool/OMNIbus core components
• IBM Tivoli Netcool/OMNIbus 8 Plus Gateway for Message Bus
• Tivoli Netcool/OMNIbus Web GUI and the Web GUI extensions for Event Analytics
• IBM Tivoli Netcool/Impact and the Netcool/Impact extensions for Event Analytics
However, note that the following Netcool Operations Insight base solution components cannot be installed by using the Installation Manager web application:
• Dashboard Application Services Hub
• Operations Analytics - Log Analysis
Dashboard Application Services Hub also cannot be installed in console mode.

What to do next
If required, add the Installation Manager installation directory path to your PATH environment variable.
Related information
IBM Installation Manager V1.8.5 documentation: Working from a web browser: Click here for information on how to use the Installation Manager web server to manage your installations.

Downloading for on-premises installation You need to download products and components from Passport Advantage and Fix Central.

About this task
Refer to the following topic for information on where to obtain downloads for each product and component: “V1.6.1 Product and component version matrix” on page 9. For more information, see the IBM Support Download Document for Netcool Operations Insight V1.6.1, at https://www.ibm.com/support/pages/node/6221236 .

Installing on premises Follow these instructions to install Operations Management and optionally install the solution extensions Network Management, Performance Management, and Service Management.

Quick reference to installing
Use this information as a quick reference if you are new to Netcool Operations Insight and want to perform an installation from scratch. This overview assumes detailed knowledge of the products in Netcool Operations Insight. It does not provide all the details; links are given to more information, either within the Netcool Operations Insight documentation, or in the product documentation of the constituent products of Netcool Operations Insight. This topic lists the high-level steps for installing Netcool Operations Insight.
• “Installing Operations Management” on page 41
• “Installing Network Management” on page 44
• “Installing Performance Management” on page 46
• “Installing Service Management” on page 46

Installing Operations Management You can install Operations Management on premises, or within a private cloud using Red Hat OpenShift.

Installing Operations Management on premises
The following table lists the high-level steps for installing Operations Management on premises. For information on the product and component versions to install, including which fix packs to apply, see “Products and components on premises” on page 5.
Tip: To verify the versions of installed packages, select View Installed Packages from the File menu on the main IBM Installation Manager screen.

Table 14. Quick reference for installing Operations Management on premises

1. Prepare for the installation by checking the prerequisites.
More information: For information about hardware and software compatibility of each component, and detailed system requirements, see the IBM Software Product Compatibility Reports website: http://www-969.ibm.com/software/reports/compatibility/clarity/index.html . See also “Checking prerequisites” on page 37.

2. Install IBM Installation Manager on each host where components of Netcool Operations Insight are to be installed. Installation Manager is included in the compressed file distribution of IBM Tivoli Netcool/OMNIbus and Operations Analytics - Log Analysis. Download Installation Manager separately if you are installing directly from an IBM repository or from a local repository; if you need to install it separately, you can download it from IBM Fix Central.
More information: http://www.ibm.com/support/fixcentral/ ; “Obtaining IBM Installation Manager” on page 38; “Installing Installation Manager (GUI or console example)” on page 39.

3. Install the Netcool/OMNIbus core components, and apply the latest supported fix pack. Associated tasks include creating and starting ObjectServers, and setting up failover or a multitier architecture. See “Products and components on premises” on page 5 for the latest supported fix packs.
More information: “Installing Netcool/OMNIbus and Netcool/Impact” on page 48; https://ibm.biz/BdE6tr ; https://ibm.biz/BdE6t4 ; https://ibm.biz/BdE6tF .

4. Install the Db2 database. Apply the REPORTER schema.
More information: “Installing Db2 and configuring the REPORTER schema” on page 47.

5. Install the Gateway for JDBC and the Gateway for Message Bus.
More information: Gateway for JDBC documentation: https://ibm.biz/BdE9Db ; Gateway for Message Bus documentation: https://ibm.biz/BdEQaD .


6. Install Netcool/Impact, and apply the latest supported fix pack. See “Products and components on premises” on page 5 for the latest supported fix packs.
More information: “Installing Netcool/OMNIbus and Netcool/Impact” on page 48; http://www-01.ibm.com/support/knowledgecenter/SSSHYH/welcome .

7. Configure the ObjectServer to support the related events function of the Event Analytics capability. Run the nco_sql utility against the relatedevents_objectserver.sql file, which is delivered with Netcool/Impact. A command sketch follows this table.
More information: “Configuring the ObjectServer” on page 291.

8. Install a supported version of IBM Operations Analytics - Log Analysis, and create a data source called "omnibus". See step “3” on page 304 in “Configuring integration with Operations Analytics - Log Analysis” on page 302. See “Products and components on premises” on page 5 for supported versions.
More information: http://www-01.ibm.com/support/knowledgecenter/SSPFMY/welcome .

9. Configure the Gateway for Message Bus as the interface between the ObjectServer and Operations Analytics - Log Analysis. Optionally configure the Accelerated Event Notification client if you do not want to use the default IDUC channel.
More information: “Configuring the Gateway for JDBC and Gateway for Message Bus” on page 50; https://ibm.biz/BdEQaD .

10. To support Event Search, install the latest supported version of the Tivoli Netcool/OMNIbus Insight Pack. See “Products and components on premises” on page 5 for the latest supported versions.
More information: “Installing the Tivoli Netcool/OMNIbus Insight Pack” on page 60.


11. Install the Netcool/OMNIbus Web GUI, and apply the latest supported fix pack. During the installation, ensure that the latest supported versions are selected for the following components, based on the information in “Products and components on premises” on page 5:
• IBM WebSphere Application Server
• Jazz for Service Management
• Netcool/OMNIbus Web GUI
In addition, make the following selections during the installation:
• When installing Jazz for Service Management, Installation Manager discovers two required packages in the Jazz repository. Select the Jazz for Service Management extension for IBM WebSphere and Dashboard Application Services Hub packages for installation.
• IBM WebSphere SDK Java Technology Edition V7.0.x.
• Install the Netcool Operations Insight Extensions for IBM Tivoli Netcool/OMNIbus Web GUI feature. To install Event Analytics with seasonality reporting, ensure that Install Event Analytics is selected.
More information: “Installing Dashboard Application Services Hub and the UI components” on page 51; https://ibm.biz/BdE6kW . See “Products and components on premises” on page 5 for the latest supported fix packs.

12. Configure the Web GUI for integration with Operations Analytics - Log Analysis. In the server.init file, set the scala.* properties appropriately; a sketch follows this table.
More information: “Configuring integration with Operations Analytics - Log Analysis” on page 302.
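As an illustration only, the scala.* properties in server.init identify the Operations Analytics - Log Analysis server and data source. The host name, port, and version values below are assumptions for this sketch; take the authoritative property list from “Configuring integration with Operations Analytics - Log Analysis” on page 302:

# Extract from server.init (sketch)
scala.url = https://scala.example.com:9987/Unity
scala.version = 1.3.5
scala.datasource = omnibus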

Back to top
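Item 7 in the table above runs an SQL file against the ObjectServer with the nco_sql utility. A minimal sketch follows; the ObjectServer name AGG_P and the file location are assumptions for illustration, and you are prompted for the user's password:

$OMNIHOME/bin/nco_sql -server AGG_P -user root < relatedevents_objectserver.sql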

Installing Network Management The following table lists the high-level steps for installing Network Management. For information on the product and component versions to install, including which fix packs to apply, see “Products and components on premises” on page 5. Tip: To verify the versions of installed packages, select View Installed Packages from the File menu on the main IBM Installation Manager screen.

Table 15. Quick reference for installing Network Management

1. Install the Probe for SNMP and the Syslog Probe for Network Manager.
More information: “Installing the Probe for SNMP and Syslog Probe” on page 63.

2. Optional: Configure the ObjectServer for integration with Network Manager by obtaining the ConfigOMNI script from the Network Manager installation package and running it against the ObjectServer.
More information: “Optional: Preparing the ObjectServer for integration with Network Manager” on page 64.
Important: If you have already installed Tivoli Netcool/OMNIbus, the Netcool/OMNIbus Knowledge Library, and the Probe for SNMP, you can now install Network Manager, and do not need to follow the steps in this task. The Network Manager installer configures Tivoli Netcool/OMNIbus for you during the installation process. If the ObjectServer setup changes after you have already installed and configured Network Manager and Tivoli Netcool/OMNIbus, then you must reintegrate the ObjectServer with Network Manager as described in this topic.

3. Prepare the topology database for use by Network Manager.
More information: “Preparing the database for Network Manager” on page 65.

4. Install the Network Manager core and GUI components, and apply the latest supported fix pack. See “Products and components on premises” on page 5 for the latest supported fix packs.
More information: “Installing Network Manager IP Edition and Netcool Configuration Manager” on page 66; https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/install/task/ins_installing.html .

5. Install and configure Netcool Configuration Manager, and apply the latest supported fix pack. This involves configuring the integration with Network Manager. See “Products and components on premises” on page 5 for the latest supported fix packs.
More information: “Installing Network Manager IP Edition and Netcool Configuration Manager” on page 66; “Configuring integration with Netcool Configuration Manager” on page 69; http://www-01.ibm.com/support/knowledgecenter/SS7UH9_6.4.2/ncm/wip/install/concept/ncm_ins_installingncm.dita . For more information about fix packs for Netcool Configuration Manager 6.4.2.11, see https://www.ibm.com/support/knowledgecenter/SS7UH9_6.4.2/ncm/wip/relnotes/ncm_rn_top.html .

6. To support Topology Search, install the latest supported version of the Network Manager Insight Pack and configure the connection to the NCIM topology database. See “Products and components on premises” on page 5 for the latest supported versions.
More information: “Installing the Network Manager Insight Pack” on page 89.


7. Configure the topology search capability. Run nco_sql against the scala_itnm_configuration.sql file, which is delivered in the Netcool/OMNIbus fix pack. Install the tools and menus to launch the custom apps of the Network Manager Insight Pack in the Operations Analytics - Log Analysis UI from the Web GUI.
More information: “Configuring topology search” on page 335.

8. Configure the Web GUI to launch the custom apps of the Network Manager Insight Pack from the event lists.
More information: See step “3” on page 337 of “Configuring topology search” on page 335.

Back to top

Installing Performance Management The following table lists the high-level steps for installing Performance Management. For information on the product and component versions to install, including which fix packs to apply, see “Products and components on premises” on page 5. Tip: To verify the versions of installed packages, select View Installed Packages from the File menu on the main IBM Installation Manager screen.

Table 16. Quick reference for installing Performance Management

1. To add the performance management feature, set up Network Performance Insight and follow the steps for integrating it with Netcool/OMNIbus and Network Manager.
More information: “Installing Performance Management” on page 91.

2. Install and configure the Device Dashboard to view performance information.
More information: “Installing the Device Dashboard” on page 93.


Installing Service Management

The following table lists the high-level steps for installing Service Management.

Table 17. Quick reference for installing Service Management

1 To add the service management feature, install the Agile Service Manager core, observers, and UI, and then follow the steps for integrating the observers:
• Integrate the Event Observer with the Netcool/OMNIbus gateway.
• Integrate the ITNM Observer with the Network Manager ncp_model Topology manager process.
More information: "Installing on-premises Agile Service Manager" on page 95


Installing Operations Management on premises

Follow these instructions to install the Netcool Operations Insight base solution, also known as Operations Management for Operations Insight, on premises.

Installing Db2 and configuring the REPORTER schema

Netcool Operations Insight requires a Db2 database with the REPORTER schema for historical event archiving.
Tip: For information on the housekeeping of historical Db2 event data, as well as sample SQL scripts, see the 'Historical event archive sizing guidance' section in the Netcool/OMNIbus Best Practices Guide, which can be found on the Netcool/OMNIbus best practices wiki: http://ibm.biz/nco_bps

Procedure
• Obtain and download the package for the Db2 database and the Gateway configuration scripts.
• Decompress the packages. Then, as the root system user, run the db2setup command to install the Db2 database on the host. The db2setup command starts the Db2 Setup wizard. Install as the root system user because the setup wizard needs to create a number of users in the process.
• Run IBM Installation Manager on the Netcool/OMNIbus host and install the Gateway configuration scripts. The SQL file that is needed to create the REPORTER schema is installed to $OMNIHOME/gates/reporting/db2/db2.reporting.sql.
• In the db2.reporting.sql file, make the following changes.
– Uncomment the CREATE DATABASE line.
– Set the default user name and password to match the Db2 installation:

CREATE DATABASE reporter @
CONNECT TO reporter USER db2inst1 USING db2inst1 @

– Uncomment the following lines, so that any associated journal and details rows are deleted from the database when the corresponding alerts are deleted:

-- Uncomment the line below to enable foreign keys
-- This helps pruning by only requiring the alert to be
-- deleted from the status table
, CONSTRAINT eventref FOREIGN KEY (SERVERNAME, SERVERSERIAL)
REFERENCES REPORTER_STATUS(SERVERNAME, SERVERSERIAL) ON DELETE CASCADE)

This SQL appears twice in the SQL file: once in the details table definition and once in the journal table definition. Uncomment both instances.

• Run the SQL file against the Db2 database by running the following command as the db2inst1 system user:

$ db2 -td@ -vf db2.reporting.sql
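To confirm that the REPORTER schema was created, you can connect to the new database and list its tables. A minimal sketch, assuming the sample db2inst1 credentials shown above:

$ db2 connect to REPORTER user db2inst1 using db2inst1
$ db2 list tables for all | grep -i REPORTER_
$ db2 terminate

The REPORTER_STATUS, REPORTER_JOURNAL, and REPORTER_DETAILS tables should appear in the output.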

Result
The Db2 installer creates a number of users, including db2inst1.
Related reference
Ports used by products and components
Use this information to understand which ports are used by the different products and components that make up the Netcool Operations Insight solution.
Related information
Installing Db2 servers using the Db2 Setup wizard (Linux and UNIX)
Gateway for JDBC configuration scripts for Reporting Mode

Installing Netcool/OMNIbus and Netcool/Impact

Obtain and install the Netcool/OMNIbus core components, Netcool/Impact, and the Gateway for JDBC and Gateway for Message Bus. All these products are installed by IBM Installation Manager. You can use IBM Installation Manager to download the installation packages for these products and install them in a single flow. Extra configuration of each product is required after installation.

Procedure
• Install the Netcool/OMNIbus V8.1.0.22 core components. After the installation, you can use the Initial Configuration Wizard (nco_icw) to configure the product, for example, create and start ObjectServers, and configure automated failover or a multitier architecture. See related links later for instructions on installing Netcool/OMNIbus.
• Install the Netcool/Impact GUI server and Impact server. See related links later for instructions on installing Netcool/Impact.
• Apply the latest supported Netcool/OMNIbus core and Netcool/Impact fix packs. Also ensure that you apply the appropriate IBM Tivoli Netcool/Impact Server Extensions for Netcool Operations Insight feature, which is delivered in the Netcool/Impact fix pack. For information on the product and component versions supported in the current version of Netcool Operations Insight, including supported fix packs, see "Products and components on premises" on page 5. The IBM Tivoli Netcool/Impact Server Extensions for Netcool Operations Insight feature is required for the event analytics capability. Fix packs are available from IBM Fix Central, see http://www.ibm.com/support/fixcentral/ .
• Create the connection from Netcool/Impact to the Db2 database as described in "Configuring Db2 database connection within Netcool/Impact" on page 273.
• Configure the ObjectServer to support the related events function of the event analytics capability. This requires a ParentIdentifier column in the alerts.status table. Add the column using the SQL utility as described in "Configuring the ObjectServer" on page 291.
• Configure the ObjectServer to support the topology search capability. In $NCHOME/omnibus/extensions, run the nco_sql utility against the scala_itnm_configuration.sql file.

./nco_sql -user root -password myp4ss -server NCOMS < /opt/IBM/tivoli/netcool/omnibus/extensions/scala/scala_itnm_configuration.sql

Triggers are applied to the ObjectServer that delay the storage of events until the events are enriched by Network Manager IP Edition data from the NCIM database.
• Install the Gateway for JDBC and Gateway for Message Bus.

After installation, create the connection between the ObjectServer and the gateways in the Server Editor (nco_xigen). See related links later for instructions on creating connections in the Server Editor.
• Configure the integration between Netcool/Impact and IBM Connections. This involves importing the $IMPACT_HOME/add-ons/IBMConnections/importData project into Netcool/Impact and adding IBM Connections properties to the $IMPACT_HOME/etc/NCI_server.props file. After you edit this file, restart the Netcool/Impact server. See related links later for instructions on configuring integration to IBM Connections and restarting the Impact server.

What to do next
Search on IBM Fix Central for available interim fixes and apply them. See http://www.ibm.com/support/fixcentral/ .
Related concepts
Connections in the Server Editor
Related tasks
Installing Tivoli Netcool/OMNIbus
Creating and running ObjectServers
Configuring Db2 database connection within Netcool/Impact
You can configure a connection to a valid Db2 database from within IBM Tivoli Netcool/Impact.
Configuring the ObjectServer
Before deploying rules based on related events or patterns, you must run SQL to update the ObjectServer. This SQL introduces relevant triggers into the ObjectServer to enable the rules to be fully functional.
Configuring integration to IBM Connections
IBM Tivoli Netcool/Impact provides integration to IBM Connections by using a Netcool/Impact IBMConnections action function. The IBMConnections action function allows users to query forums and topics lists, create a new forum, create a new topic, and update existing topics. The IBMConnections action function package is available in the directory $IMPACT_HOME/integrations/IBMConnections.
Restarting the Impact server
Related reference
On-premises scenarios for Operations Management
This topic presents the scenarios available in a deployment of Operations Management on premises together with the associated architectures.
Initial configuration wizard
Ports used by products and components
Use this information to understand which ports are used by the different products and components that make up the Netcool Operations Insight solution.
Related information
Installing Netcool/Impact

Installing IBM Operations Analytics - Log Analysis

Operations Analytics - Log Analysis supports GUI, console, and silent installations. The installation process differs for 64-bit and z/OS operating systems.

Procedure
Operations Analytics - Log Analysis can be installed by IBM Installation Manager, or you can run the install.sh wrapper script.
Tip: The best practice is to install the Web GUI and Operations Analytics - Log Analysis on separate hosts.
Restriction: Operations Analytics - Log Analysis does not support installation in Group mode of IBM Installation Manager.
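For example, to use the wrapper script: a minimal sketch, where the extraction directory /opt/downloads/LogAnalysis is a hypothetical placeholder:

$ cd /opt/downloads/LogAnalysis
$ export LANG=en_US.UTF-8
$ ./install.sh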

What to do next
• If the host locale is not set to English United States, set the locale of the command shell by running export LANG=en_US.UTF-8 before you run any Operations Analytics - Log Analysis scripts.
• Install the Tivoli Netcool/OMNIbus Insight Pack into Operations Analytics - Log Analysis to enable ingestion of event data into Operations Analytics. For more information, see "Installing the Tivoli Netcool/OMNIbus Insight Pack" on page 60.
• (Optional) Install the Network Manager Insight Pack into Operations Analytics - Log Analysis to use the topology search capability. For more information, see "Installing the Network Manager Insight Pack" on page 89.
• Search on IBM Fix Central for available interim fixes and apply them. See http://www.ibm.com/support/fixcentral/ .
Related tasks
Installing Operations Analytics - Log Analysis V1.3.5
Installing Operations Analytics - Log Analysis V1.3.3
Related reference
Ports used by products and components
Use this information to understand which ports are used by the different products and components that make up the Netcool Operations Insight solution.

Configuring the Gateway for JDBC and Gateway for Message Bus

Configure the Gateway for JDBC to run in reporting mode, so it can forward event data to the Db2 database for archiving. Configure the Gateway for Message Bus to forward event data to Operations Analytics - Log Analysis and run it in Operations Analytics - Log Analysis mode.

Before you begin
• Install the Db2 database and configure the REPORTER schema so that the Gateway for JDBC can connect.
• Install the gateways on the same host as the Tivoli Netcool/OMNIbus core components (that is, server 1).
• Install Operations Analytics - Log Analysis and obtain the URL for the connection to the Gateway for Message Bus.

Procedure
• Configure the Gateway for JDBC. This involves the following steps:
– Obtain the JDBC driver for the target database from the database vendor and install it according to the vendor's instructions. The drivers are usually provided as .jar files.
– To enable the gateway to communicate with the target database, you must specify values for the Gate.Jdbc.* properties in the $OMNIHOME/etc/G_JDBC.props file. This is the default properties file, which is configured for reporting mode, that is supplied with the gateway. Here is a sample properties file for the Gateway for JDBC.

# Reporting mode properties
Gate.Jdbc.Mode: 'REPORTING'
# Table properties
Gate.Jdbc.StatusTableName: 'REPORTER_STATUS'
Gate.Jdbc.JournalTableName: 'REPORTER_JOURNAL'
Gate.Jdbc.DetailsTableName: 'REPORTER_DETAILS'
# JDBC Connection properties
Gate.Jdbc.Driver: 'com.ibm.db2.jcc.Db2Driver'
Gate.Jdbc.Url: 'jdbc:db2://server3:50000/REPORTER'
Gate.Jdbc.Username: 'db2inst1'
Gate.Jdbc.Password: 'db2inst1'
Gate.Jdbc.ReconnectTimeout: 30
Gate.Jdbc.InitializationString: ''
# ObjectServer Connection properties
Gate.RdrWtr.Username: 'root'
Gate.RdrWtr.Password: 'netcool'
Gate.RdrWtr.Server: 'AGG_V'

• Configure the Gateway for Message Bus to forward event data to Operations Analytics - Log Analysis. This involves the following steps:
– Creating a gateway server in the Netcool/OMNIbus interfaces file
– Configuring the G_SCALA.props properties file, including specifying the .map mapping file
– Configuring the endpoint in the scalaTransformers.xml file
– Configuring the SSL connection, if required
– Configuring the transport properties in the scalaTransport.properties file
• If you do not want to use the default configuration of the Gateway for Message Bus (an IDUC channel between the ObjectServer and Operations Analytics - Log Analysis, which supports event inserts only), configure event forwarding through the AEN client. This supports event inserts and reinserts, and involves the following steps:
– Configuring AEN event forwarding in the Gateway for Message Bus
– Configuring the AEN channel and triggers in each ObjectServer by enabling the postinsert triggers and trigger group
• Start the Gateway for Message Bus in Operations Analytics - Log Analysis mode. For example:

$OMNIHOME/bin/nco_g_xml -propsfile $OMNIHOME/etc/G_SCALA.props

The gateway begins sending events from Tivoli Netcool/OMNIbus to Operations Analytics - Log Analysis.
• Start the Gateway for JDBC in reporting mode. For example:

$OMNIHOME/bin/nco_g_jdbc -jdbcreporter

• As an alternative to starting the gateways from the command-line interface, put them under process control.
Related concepts
Tivoli Netcool/OMNIbus process control
Related information
Gateway for JDBC documentation
Gateway for Message Bus documentation

Installing Dashboard Application Services Hub and the UI components

Install the Dashboard Application Services Hub and all the UI components. This applies to the Netcool/OMNIbus Web GUI, the Event Analytics component, fix packs, and optionally Reporting Services.
The UI components are installed in two stages. First, IBM WebSphere Application Server and Jazz for Service Management are installed, which provide the underlying UI technology. Then, the Web GUI and the extension packages that support the Event Analytics component and the event search capability are installed. After installation, configure the Web GUI to integrate with Operations Analytics - Log Analysis and support the topology search capability.
You can optionally install Reporting Services V3.1 into Dashboard Application Services Hub. You can set up Network Manager and Netcool Configuration Manager to work with Reporting Services by installing their respective reports when installing the products. Netcool/OMNIbus V8.1.0.22 and later can be integrated with Reporting Services V3.1 to support reporting on events. To configure this integration, connect Reporting Services to a relational database through a gateway. Then, import the report package that is supplied with Netcool/OMNIbus into Reporting Services. For more information about event reporting, see the Netcool/OMNIbus documentation, http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/install/task/omn_con_ext_deploytcrreports.html .

Before you begin
• Obtain the packages from IBM Passport Advantage. For information about the eAssembly numbers you need for the packages, see this IBM Support document, http://www-01.ibm.com/support/docview.wss?uid=swg24043698 .
• To install Reporting Services V3.1, ensure that the host meets the extra requirements at the Jazz for Service Management Knowledge Center, http://www.ibm.com/support/knowledgecenter/SSEKCU_1.1.3.0/com.ibm.psc.doc/install/tcr_c_install_prereqs.html .

Procedure
1. Start Installation Manager and install Jazz for Service Management. The packages that you need to install are as follows.
• IBM WebSphere Application Server V8.5.5.16 for Jazz for Service Management: Select V8.5.5.16. If V8.0.5 is also identified, clear it.
• IBM WebSphere SDK Java Technology Edition: V7.0.x.
• Jazz for Service Management V1.1.3.7: Select the following items for installation.
– Jazz for Service Management extension for IBM WebSphere V8.5.
– Dashboard Application Services Hub V3.1.3.6.

• Reporting Services V3.1: This package is optional. Select it if you want to run reports for events and network management.
2. Install the packages that constitute the Web GUI and extensions.
• Netcool/OMNIbus Web GUI: This is the base component that installs the Web GUI.
• Install tools and menus for event search with IBM SmartCloud Analytics - Log Analysis: This package installs the tools that launch the custom apps of the Tivoli Netcool/OMNIbus Insight Pack from the event lists.
• Netcool Operations Insight Extensions for IBM Tivoli Netcool/OMNIbus Web GUI: This package installs the Event Analytics GUIs.
• Netcool/OMNIbus Web GUI fix pack, as specified in "Products and components on premises" on page 5: This is the fix pack that contains the extensions for the topology search capability.

3. Configure the Web GUI, for example, the connection to a data source (ObjectServer), users, groups, and so on. You can use the Web GUI configuration tool to do this. For more information, see https://ibm.biz/BdXqcP .
4. Configure the integration with Operations Analytics - Log Analysis. Ensure that the server.init file has the following properties set:

scala.app.keyword=OMNIbus_Keyword_Search
scala.app.static.dashboard=OMNIbus_Static_Dashboard
scala.datasource=omnibus
scala.url=protocol://host:port
scala.version=1.2.0.3

If you need to change any of these values, restart the Web GUI server.
5. Set up the Web GUI Administration API client.
6. Install the tools and menus to launch the custom apps of the Network Manager Insight Pack in the Operations Analytics - Log Analysis UI from the Web GUI. In $WEBGUI_HOME/extensions/LogAnalytics, run the runwaapi command against the scalaEventTopology.xml file.

$WEBGUI_HOME/waapi/bin/runwaapi -user username -password password -file scalaEventTopology.xml

Where username and password are the credentials of the administrator user that are defined in the $WEBGUI_HOME/waapi/etc/waapi.init properties file that controls the WAAPI client.

What to do next
• Search on IBM Fix Central for available interim fixes and apply them. See http://www.ibm.com/support/fixcentral/ .
• Reconfigure your views in the Web GUI event lists to display the NmosObjInst column. The tools that launch the custom apps of the Network Manager Insight Pack work only against events that have a value in this column. For more information, see http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/webtop/wip/task/web_cust_settingupviews.html .
Related concepts
Installing Event Analytics
Read the following topics before you install Event Analytics.
Related tasks
Installing the Web GUI
Restarting the server
Related reference
server.init properties
Ports used by products and components
Use this information to understand which ports are used by the different products and components that make up the Netcool Operations Insight solution.

Installing Event Analytics

Read the following topics before you install Event Analytics.

Installing Event Analytics

You can install Event Analytics with the IBM Installation Manager GUI or console, or do a silent installation. Event Analytics supports IBM Installation Manager 1.7.2 up to 1.8.4.

For more information about installing and using IBM Installation Manager, see the following IBM Knowledge Center: http://www-01.ibm.com/support/knowledgecenter/SSDV2W/im_family_welcome.html

Prerequisites

Before you install Event Analytics, you must complete the following preinstallation tasks.

Event Archiving
You must be running a database with archived events. Event Analytics supports the Db2 and Oracle databases. Event Analytics support of MS SQL requires a minimum of IBM Tivoli Netcool/Impact 7.1.0.1.

You can use a gateway to archive events to a database. In reporting mode, the gateway archives events to a target database. For more information, see http://www.ibm.com/support/knowledgecenter/SSSHTQ/omnibus/gateways/jdbcgw/wip/concept/jdbcgw_intro.html .
Note: The gateway can operate in two modes: audit mode and reporting mode. Event Analytics only supports reporting mode.

Browser Requirements
To display the Seasonal Event Graphs in Microsoft Internet Explorer, you must install the Microsoft Silverlight plug-in.

Reduced Memory
If you are not running Event Analytics on Solaris, uncomment or add the following entry in the jvm.options file:

#-Xgc:classUnloadingKickoffThreshold=100

Uncommenting or adding that entry dynamically reduces memory requirements.

Netcool/Impact installation components
Select the components of Netcool/Impact that you want to install. If you purchased IBM Netcool Operations Insight, the Impact Server Extensions component is displayed in the list and is selected automatically. This component contains extra Impact Server features that work with IBM Netcool Operations Insight.
If you accept the default selection, both the GUI Server and the Impact Server are installed on the same computer. In a production environment, install the Impact Server and the GUI Server on separate computers. So, for example, if you already installed the Impact Server on another computer, you can choose to install the GUI Server alone. The Installation Manager component is selected automatically on systems where Installation Manager is not already installed.
Netcool/Impact does not support Arabic or Hebrew; therefore Event Analytics users who are working in Arabic or Hebrew see some English text.

Installing Event Analytics (GUI)

You can install Event Analytics with the IBM Installation Manager GUI.

Before you begin
• Determine which Installation Manager user mode you require.
• Ensure that the necessary user permissions are in place for your intended installation directories.
• Configure localhost on the computer where Event Analytics packages are to be installed.

About this task
The installation of Event Analytics requires you to install product packages for the following product groups:
• IBM Tivoli Netcool/Impact
• IBM Tivoli Netcool/OMNIbus
• IBM Netcool
The steps for starting Installation Manager are different depending on which user mode you installed it in. The steps for completing the Event Analytics installation with the Installation Manager wizard are common to all user modes and operating systems.

Installation Manager takes account of your current umask settings when it sets the permissions mode of the files and directories that it installs. If you use Administrator mode or Non-administrator mode and your umask is 0, Installation Manager uses a umask of 22. If you use Group mode, Installation Manager ignores any group bits that are set and uses a umask of 2 if the resulting value is 0.
To install the packages and features, complete the following steps.

Procedure
1. Start Installation Manager. Change to the /eclipse subdirectory of the Installation Manager installation directory and use the following command to start Installation Manager:
./IBMIM
To record the installation steps in a response file for use with silent installations on other computers, use the -record response_file option. For example:

./IBMIM -record /tmp/install_1.xml

2. Configure Installation Manager to point to either a local repository or an IBM Passport Advantage repository, where the download packages are available. Within the IBM Knowledge Center content for Installation Manager, see the topic that is called Installing packages by using wizard mode.
3. In the main Installation Manager window, click Install and follow the installation wizard instructions to complete the installation.
4. In the Install tab, select the following installation packages, and then click Next.
• Packages for IBM Tivoli Netcool/Impact:
IBM Tivoli Netcool/Impact GUI Server_7.1.0.19
IBM Tivoli Netcool/Impact Server_7.1.0.19
IBM Tivoli Netcool/Impact Server Extensions for Netcool Operations Insight_7.1.0.19
• Packages for IBM Tivoli Netcool/OMNIbus:
IBM Tivoli Netcool/OMNIbus_8.1.0.22
• Packages for IBM Tivoli Netcool/OMNIbus Web GUI:
IBM Tivoli Netcool/OMNIbus Web GUI_8.1.0.19
Netcool Operations Insight Extensions for IBM Tivoli Netcool/OMNIbus Web GUI_8.1.0.19
5. In the Licenses tab, review the licenses. If you are happy with the license content, select I accept the terms in the license agreements and click Next.
6. In the Location tab, enter information for the Installation Directory and Architecture or proceed with the default values, and click Next.
• For IBM Netcool, the default values are /opt/IBM/netcool and 64-bit.
• For IBM Netcool Impact, the default values are /opt/IBM/tivoli/impact and 64-bit.
7. In the Features tab, select the following features and then click Next. Other features are auto-selected.

Table 18. Available features
• IBM Tivoli Netcool/OMNIbus Web GUI 8.1.0.19 > Install base features: To install and run Event Analytics.
• Netcool Operations Insight Extensions Web GUI 8.1.0.19 > Install Event Analytics: Contains the Event Analytics components.
8. In the Summary tab, review the summary details. If you are happy with the summary details, click Next, but if you need to change any detail, click Back.
9. To complete the installation, click Finish.

Results
Installation Manager installs Event Analytics.

What to do next
1. Configure the ObjectServer for Event Analytics, see "Configuring the ObjectServer" on page 291.
2. Connect to a valid database from within IBM Tivoli Netcool/Impact. To configure a connection to one of the Event Analytics supported databases, see the following topics:
• Db2: "Configuring Db2 database connection within Netcool/Impact" on page 273
• Oracle: "Configuring Oracle database connection within Netcool/Impact" on page 275
• MS SQL: "Configuring MS SQL database connection within Netcool/Impact" on page 277
3. If you add a cluster to the Impact environment, you must update the data sources in IBM Tivoli Netcool/Impact. For more information, see "Configuring extra failover capabilities in the Netcool/Impact environment" on page 293.
4. You must set up a remote connection from the Dashboard Application Services Hub to Netcool/Impact. For more information, see remote connection.

Installing Event Analytics (Console)

You can install Event Analytics with the IBM Installation Manager console.

Before you begin
Obtain an IBM ID and an entitlement to download Event Analytics from IBM Passport Advantage. The packages that you are entitled to install are listed in Installation Manager.
Take the following actions:
• Determine which Installation Manager user mode you require.
• Ensure that the necessary user permissions are in place for the installation directories.
• Decide which features you want to install from the installation packages and gather the information that is required for those features.
• Configure localhost on the computer where Event Analytics is to be installed.

About this task
The steps for starting Installation Manager are different depending on which user mode you installed it in. The steps for completing the Event Analytics installation with the Installation Manager console are common to all user modes and operating systems.
Installation Manager takes account of your current umask settings when it sets the permissions mode of the files and directories that it installs. If you use Administrator mode or Non-administrator mode and your umask is 0, Installation Manager uses a umask of 22. If you use Group mode, Installation Manager ignores any group bits that are set and uses a umask of 2 if the resulting value is 0.

Procedure
1. Change to the /eclipse/tools subdirectory of the Installation Manager installation directory.
2. Use the following command to start Installation Manager:
./imcl -c
OR
./imcl -consoleMode
3. Configure Installation Manager to download package repositories from IBM Passport Advantage:
a) From the Main Menu, select Preferences.
b) In the Preferences menu, select Passport Advantage.

c) In the Passport Advantage menu, select Connect to Passport Advantage.
d) When prompted, enter your IBM ID user name and password.
e) Return to the Main Menu.
4. From the options that are provided on the installer, add the repository that you want to install.
5. From the Main Menu, select Install. Follow the installer instructions to complete the installation. The installer requires the following inputs at different stages of the installation:
• Select Event Analytics.
• When prompted, enter an Installation Manager shared directory or accept the default directory.
• When prompted, enter an installation directory or accept the default directory.
• Clear the features that you do not require.
• If required, generate a response file for use with silent installations on other computers. Enter the directory path and a file name with a .xml extension. The response file is generated before installation completes.
6. When the installation is complete, select Finish.

Results
Installation Manager installs Event Analytics.

What to do next
If you add a cluster to the Impact environment, you must update the data sources in IBM Tivoli Netcool/Impact 7.1. For more information, see "Configuring extra failover capabilities in the Netcool/Impact environment" on page 293.

Silently installing Event Analytics

You can install Event Analytics silently with IBM Installation Manager. This installation method is useful if you want identical installation configurations on multiple workstations. Silent installation requires a response file that defines the installation configuration.

Before you begin
Take the following actions:
• Create or record an Installation Manager response file. You can specify a local or remote IBM Tivoli Netcool/OMNIbus package and a Netcool Operations Insight Extensions Web GUI package with a repository in the response file. You can also specify that Installation Manager downloads the packages from IBM Passport Advantage. For more information about specifying authenticated repositories in response files, search for the Storing credentials topic in the Installation Manager information center: http://www-01.ibm.com/support/knowledgecenter/SSDV2W/im_family_welcome.html
A default response file is included in the Event Analytics installation package in responsefiles/platform, where platform can be unix or windows. A skeleton of the response file format is also sketched at the end of this section. When you record a response file, you can use the -skipInstall argument to create a response file for an installation process without performing the installation. For example:
– Create or record a skipInstall:

IBMIM.exe -record C:\response_files\install_1.xml -skipInstall C:\Temp\skipInstall

• Determine which Installation Manager user mode you require.
• Read the license agreement. The license agreement file, license.txt, is stored in the /native/license_version.zip archive, which is contained in the installation package.

• Ensure that the necessary user permissions are in place for your intended installation directories.
• Configure localhost on the computer where Event Analytics is to be installed.
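For orientation, an Installation Manager response file has the following general shape. This is a minimal sketch only; the repository location, profile name, and offering ID shown are hypothetical placeholders, so record a response file or start from the supplied default rather than writing one from scratch:

<?xml version="1.0" encoding="UTF-8"?>
<agent-input>
  <server>
    <!-- hypothetical local repository path -->
    <repository location="/opt/downloads/noi-repository"/>
  </server>
  <!-- profile name and install location are placeholders -->
  <profile id="IBM Netcool" installLocation="/opt/IBM/netcool"/>
  <install>
    <!-- the offering id below is a hypothetical placeholder, not a real package identifier -->
    <offering id="com.example.webgui.offering" profile="IBM Netcool"/>
  </install>
</agent-input>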

Procedure
1. Change to the /eclipse/tools subdirectory of the Installation Manager installation directory.
2. To encrypt the password that is used by the administrative user for the initial log-in to Dashboard Application Services Hub, run the following command:
./imutilsc encryptString password
Where password is the password to be encrypted.
3. To install Event Analytics, run the following command:
./imcl -input response_file -silent -log /tmp/install_log.xml -acceptLicense
Where response_file is the directory path to the response file.
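For example, a worked form of the step 3 command, in which the response file path is a hypothetical placeholder:

$ cd /opt/IBM/InstallationManager/eclipse/tools
$ ./imcl -input /opt/responsefiles/unix/install_ea.xml -silent -log /tmp/install_log.xml -acceptLicense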

Results
Installation Manager installs Event Analytics.

What to do next
If you add a cluster to the Impact environment, you must update the data sources in IBM Tivoli Netcool/Impact 7.1. For more information, see "Configuring extra failover capabilities in the Netcool/Impact environment" on page 293.

Post-installation tasks

You must perform these tasks following installation of Event Analytics.

Ensuring sufficient SQL connections on Derby data sources

To ensure smooth running of Event Analytics, ensure you have sufficient SQL connections on the four Derby data sources. Do this by increasing the maximum number of SQL connections for each data source.

About this task
To avoid running out of SQL connections for the Derby database and the resultant slowing down of GUI response, increase the maximum number of SQL connections on the four Derby data sources from 5 to 50. The four Derby data sources are the following:
• ImpactDB
• NOIReportDataSource
• RelatedEventsDataSourceseasonality
• ReportDataSource
For information on how to increase the maximum number of SQL connections on these data sources, see the Netcool/Impact topic at the bottom of the page.
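As an illustration only, the connection limit is typically raised by editing the data source's .ds file; the property name shown here is an assumption, so follow the linked Netcool/Impact topic for the authoritative procedure:

# In $IMPACT_HOME/etc/servername_ImpactDB.ds (property name assumed)
ImpactDB.Derby.MAXSQLCONNECTION=50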

Avoiding stack overflow errors on Derby data sources

To avoid stack overflow errors on the four Derby data sources, lower the trace level.

Before you begin
Before you begin this task, perform the following activities:
• Ensure that the IMPACT_HOME environment variable is correctly set.
• Ensure that you know the name of the Netcool/Impact server. If you have a clustered Netcool/Impact server environment, ensure that you know the name of the primary server.

About this task
You are modifying backend settings for the four Derby data sources. The four Derby data sources are the following:
• ImpactDB
• NOIReportDataSource
• RelatedEventsDataSourceseasonality
• ReportDataSource
In a clustered Netcool/Impact server environment, make these changes on the primary server only. There is no need to make the changes on every server. Changes will be automatically replicated to non-primary members of the cluster.

Procedure
1. Access the command line of the Netcool/Impact server.
2. Update the trace settings for the ImpactDB Derby data source.
a) Edit the following file: $IMPACT_HOME/etc/servername_ImpactDB.ds
Where servername is the name of the Netcool/Impact server. In a clustered Netcool/Impact server environment, servername is the name of the primary server.
b) Add the following lines to the file:

ImpactDB.Derby.DSPROPERTY.1.NAME=traceLevel
ImpactDB.Derby.DSPROPERTY.1.VALUE=0
ImpactDB.Derby.NUMDSPROPERTIES=1

c) Save and close the file.
3. Update the trace settings for the NOIReportDataSource Derby data source.
a) Edit the following file: $IMPACT_HOME/etc/servername_NOIReportDataSource.ds
Where servername is the name of the Netcool/Impact server. In a clustered Netcool/Impact server environment, servername is the name of the primary server.
b) Add the following lines to the file:

NOIReportDataSource.Derby.DSPROPERTY.1.NAME=traceLevel
NOIReportDataSource.Derby.DSPROPERTY.1.VALUE=0
NOIReportDataSource.Derby.NUMDSPROPERTIES=1

c) Save and close the file.
4. Update the trace settings for the RelatedEventsDataSourceseasonality Derby data source.
a) Edit the following file: $IMPACT_HOME/etc/servername_RelatedEventsDataSourceseasonality.ds
Where servername is the name of the Netcool/Impact server. In a clustered Netcool/Impact server environment, servername is the name of the primary server.
b) Add the following lines to the file:

RelatedEventsDataSourceseasonality.Derby.DSPROPERTY.1.NAME=traceLevel
RelatedEventsDataSourceseasonality.Derby.DSPROPERTY.1.VALUE=0
RelatedEventsDataSourceseasonality.Derby.NUMDSPROPERTIES=1

c) Save and close the file.
5. Update the trace settings for the ReportDataSource Derby data source.
a) Edit the following file: $IMPACT_HOME/etc/servername_ReportDataSource.ds
Where servername is the name of the Netcool/Impact server. In a clustered Netcool/Impact server environment, servername is the name of the primary server.
b) Add the following lines to the file:

ReportDataSource.Derby.DSPROPERTY.1.NAME=traceLevel
ReportDataSource.Derby.DSPROPERTY.1.VALUE=0
ReportDataSource.Derby.NUMDSPROPERTIES=1

c) Save and close the file.
6. Restart the Netcool/Impact server. In a clustered Netcool/Impact server environment, restart all of the servers in the cluster.
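A minimal restart sketch for step 6; the script names are assumptions, so check the Netcool/Impact documentation for your version:

# Script names assumed; run on each server that must be restarted
$ $IMPACT_HOME/bin/stopImpactServer.sh
$ $IMPACT_HOME/bin/startImpactServer.sh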

Managing event regrouping

One of the key features of Event Analytics is the ability to correlate events together, either based on regular occurrence of the events closely together in time (related events grouping) or based on pattern analysis, and to present these events to the operators in the Event Viewer as an event group. If the parent event within this event group is deleted for any reason, then by default all of the child events become ungrouped. Follow the instructions in this topic to ensure that the events are automatically regrouped.

About this task
Netcool/Impact includes a policy activator service called LookForPatternOrphans. The associated policy regularly checks to see if any event group parent events have been deleted. If it finds that this has occurred, it identifies the orphan events and determines which of these events is the next most important event. It then proceeds to group these events under that next most important event.

Procedure
1. Log into the Netcool/Impact UI.
2. Click Services.
3. Click LookForPatternOrphans from the list of services on the left.
4. Click Starts automatically when server starts.

5. Click Save.

Installing the Tivoli Netcool/OMNIbus Insight Pack

This topic explains how to install the Netcool/OMNIbus Insight Pack into the Operations Analytics - Log Analysis product. Operations Analytics - Log Analysis can be running while you install the Insight Pack. This Insight Pack ingests event data into Operations Analytics - Log Analysis and installs custom apps.

Before you begin
• Install the Operations Analytics - Log Analysis product. The Insight Pack cannot be installed without the Operations Analytics - Log Analysis product.
• Download the relevant Insight Pack installation package from IBM Passport Advantage, ensuring that the downloaded version is compatible with the installed versions of Netcool Operations Insight and Operations Analytics - Log Analysis.
• Install Python 2.6 or later with the simplejson library, which is required by the Custom Apps that are included in Insight Packs.

Procedure
1. Create a new OMNIbus directory under $UNITY_HOME/unity_content.
2. Copy the Netcool/OMNIbus Insight Pack installation package to $UNITY_HOME/unity_content/OMNIbus.
3. Unpack and install the Insight Pack, using the following command as an example:

$UNITY_HOME/utilities/pkg_mgmt.sh -install $UNITY_HOME/unity_content/OMNIbus/OMNIbusInsightPack_v1.3.0.2.zip

4. On the Operations Analytics - Log Analysis UI, use the Data Source Wizard to create a data source into which the event data is ingested. The "omnibus1100" data source can ingest data for both the Tivoli Netcool/OMNIbus Insight Pack and the Network Manager Insight Pack.
a) In the Select Location panel, select Custom and type the Netcool/OMNIbus server host name. Enter the same host name that was used for the JsonMsgHostname transport property of the Gateway for Message Bus.
b) In the Select Data panel, enter the following field values:

File path: NCOMS. This is the default value of the jsonMsgPath transport property of the Gateway for Message Bus. If you changed this value from the default, change the value of the File path field accordingly.
Type: This is the name of the data source type on which this data source is based.
• To use the default data source type, specify OMNIbus1100.
• To use a customized data source type, specify the name of the customized data source type; for example: customOMNIbus

Collection: OMNIbus1100-Collection
c) In the Set Attributes panel, enter the following field values:

Name: omnibus. Ensure that the value that you type is the same as the value of the scala.datasource property in the Web GUI server.init file. If the Name field has a value other than omnibus, use the same value for the scala.datasource property.
Group: Leave this field blank.
Description: Type a description of your choice.

Results
The Insight Pack is installed to the directory specified in step 3. After the installation is completed, the Rule Set, Source Type, and Collection required for working with Netcool/OMNIbus events are in place. You can view these resources in the Administrative Settings page of the Operations Analytics - Log Analysis UI.
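To verify the installation from the command line, you can ask pkg_mgmt to list the installed packs. A minimal sketch; the -list option is an assumption, so check the pkg_mgmt.sh usage output for the exact flag in your version:

$UNITY_HOME/utilities/pkg_mgmt.sh -list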

What to do next
• Use the pkg_mgmt command to verify the installation of the Insight Pack. See Verifying the Tivoli Netcool/OMNIbus Insight Pack.
• If you have several ObjectServers, use separate instances of the Gateway for Message Bus to connect to each ObjectServer. The best practice is for each gateway to send events to a single data source. For more information about configuring the gateway to send events to Operations Analytics - Log Analysis, see the Gateway for Message Bus documentation at https://ibm.biz/BdEQaD and search for Integrating with Operations Analytics - Log Analysis.
Related concepts
Data Source creation in Operations Analytics - Log Analysis V1.3.5
Data Source creation in Operations Analytics - Log Analysis V1.3.3
Related tasks
Installing the Tivoli Netcool/OMNIbus Insight Pack

This topic explains how to install the Netcool/OMNIbus Insight Pack into the Operations Analytics - Log Analysis product. Operations Analytics - Log Analysis can be running while you install the Insight Pack. This Insight Pack ingests event data into Operations Analytics - Log Analysis and installs custom apps.
Installing the Network Manager Insight Pack
This topic explains how to install the Network Manager Insight Pack into the Operations Analytics - Log Analysis product and make the necessary configurations. The Network Manager Insight Pack is required only if you deploy the Networks for Operations Insight feature and want to use the topology search capability. For more information, see "Network Manager Insight Pack" on page 334. Operations Analytics - Log Analysis can be running while you install the Insight Pack.
Related information
Gateway for Message Bus documentation

Installing Network Management

This installation scenario describes how to set up the Networks for Operations Insight feature in the Netcool Operations Insight solution. A sample system topology is given on which the installation tasks are based. It is assumed that the core products of the Netcool Operations Insight solution are already installed and running.
The information supplied in this scenario is high-level and covers the most salient points and possible issues you might encounter that are specific to the Networks for Operations Insight feature in the Netcool Operations Insight solution. This scenario is end-to-end and you should perform the tasks in the specified order. For more information, see the Related concept, task, and information links at the bottom of this topic.

Before you begin
• Install the components of Netcool Operations Insight as described in "Products and components on premises" on page 5. The Networks for Operations Insight solution requires that the following products are installed, configured, and running as follows:
– The Tivoli Netcool/OMNIbus V8.1 server components are installed and an ObjectServer is created and running. Ensure that the administrator user of the ObjectServer was changed from the default.
– The Tivoli Netcool/OMNIbus V8.1 Web GUI is installed and running in an instance of Dashboard Application Services Hub. The ObjectServer is defined as the Web GUI data source.
– An IBM Db2 database is installed and configured for event archiving, and the Gateway for JDBC is installed and configured to transfer and synchronize the events from the ObjectServer.
– IBM Operations Analytics - Log Analysis is installed and running, and configured so that events are forwarded from Tivoli Netcool/OMNIbus to Operations Analytics - Log Analysis via the Gateway for Message Bus. See "Configuring integration with Operations Analytics - Log Analysis" on page 302.
• Obtain the following information about the ObjectServer:
– Host name and port number
– Installation directory (that is, the value of the $OMNIHOME environment variable)
– Name, for example, NCOMS
– Administrator password
• Install and configure the event search and event seasonality features.
If any of the above products are not installed, or features not configured, they must be installed and configured before you can set up the Networks for Operations Insight feature.

About this task
This task and the sub-tasks describe the scenario of a fresh deployment of the products in the Networks for Operations Insight feature. The system topology is a logical sample. It is not the only system topology that can be used. It is intended for reference and to help you plan your deployment. The system topology is as follows:

• Tivoli Netcool/OMNIbus and Network Manager are installed on separate hosts (that is, a distributed installation). The version of Tivoli Netcool/OMNIbus is 8.1.
• The ObjectServer is configured to be the user repository for the products.
Note: All the products of the Netcool Operations Insight solution also support the use of an LDAP directory as the user repository.
• Network Manager and Netcool Configuration Manager both use the V8.1 ObjectServer to store and manage events.
• In this topology, the default Db2 v10.5 Enterprise Server Edition database is used.
Related concepts
Network Management data flow
Use this information to understand how event data is retrieved from a monitored application environment and transferred between the products and components of Network Management in order to provide Topology Search, Network Health Dashboard and Device Dashboard capabilities.
Related tasks
Installing Performance Management
To add the Performance Management solution extension, install Network Performance Insight and then integrate it with Netcool Operations Insight.
Related reference
Supported products for Networks for Operations Insight
Release notes
IBM Netcool Operations Insight V1.6.1 is available. Compatibility, installation, and other getting-started issues are addressed in these release notes.

Installing the Probe for SNMP and Syslog Probe

The Networks for Operations Insight feature requires the Probe for SNMP and the Syslog Probe. It is important that you install the probes that are included in the entitlement for the Tivoli Netcool/OMNIbus V8.1 product. Although the probes are also available in the Network Manager IP Edition entitlement, do not install them from Network Manager IP Edition. The instances of the probes that are available with Tivoli Netcool/OMNIbus V8.1 are installed by IBM Installation Manager.

Procedure
1. Change to the /eclipse subdirectory of the Installation Manager installation directory and run the following command to start Installation Manager:

./IBMIM

To record the installation steps in a response file for use with silent installations on other computers, use the -record option. For example, to record to the /tmp/install_1.xml file:

./IBMIM -record /tmp/install_1.xml

2. Configure Installation Manager to download package repositories from IBM Passport Advantage.
3. In the main Installation Manager pane, click Install and follow the installation wizard instructions to complete the installation. The installer requires the following inputs at different stages of the installation:
• If prompted, enter your IBM ID user name and password.
• Read and accept the license agreement.
• Specify an Installation Manager shared directory or accept the default directory.
• Select the nco-p-syslog feature for the Syslog Probe, and select the nco-p-mttrapd feature for the Probe for SNMP.
• After the installation completes, click Finish.

Results
If the installation is successful, Installation Manager displays a success message and the installation history is updated to record the successful installation. If not, you can use Installation Manager to uninstall or modify the installation.

What to do next
Ensure that both probes are configured:
• For more information about configuring the Probe for SNMP, see the Tivoli Netcool/OMNIbus Knowledge Center, http://www-01.ibm.com/support/knowledgecenter/SSSHTQ/omnibus/probes/snmp/wip/reference/snmp_config.html .
• For more information about configuring the Syslog Probe, see the Tivoli Netcool/OMNIbus Knowledge Center, http://www-01.ibm.com/support/knowledgecenter/SSSHTQ/omnibus/probes/syslog/wip/concept/syslog_intro.html .
Related tasks
Netcool/OMNIbus V8.1 documentation: Obtaining IBM Installation Manager
You can install IBM Installation Manager with a GUI or console, or do a silent installation. Before installation, you must determine which user mode you require.
Installing Tivoli Netcool/OMNIbus V8.1
Related reference
Tivoli Netcool/OMNIbus V8.1 installable features

Optional: Preparing the ObjectServer for integration with Network Manager

If you have already installed Tivoli Netcool/OMNIbus, the Netcool/OMNIbus Knowledge Library, and the Probe for SNMP, you can now install Network Manager, and do not need to follow the steps in this task. The Network Manager installer configures Tivoli Netcool/OMNIbus for you during the installation process. If the ObjectServer setup changes after you have already installed and configured Network Manager and Tivoli Netcool/OMNIbus, then you must reintegrate the ObjectServer with Network Manager as described in this topic. To reintegrate the Network Manager product with the existing Tivoli Netcool/OMNIbus V8.1 ObjectServer, run the ConfigOMNI script against the ObjectServer.

Procedure
1. Use the ConfigOMNI script to configure an ObjectServer to run with Network Manager. The script creates the Network Manager triggers and GUI account information.
If the ObjectServer is on a remote server, then copy the $NCHOME/precision/install/scripts/ConfigOMNI script and the support script $NCHOME/precision/scripts/create_itnm_triggers.sql and put them into the same directory on the remote ObjectServer. If the ObjectServer is local to Network Manager, then you can use both scripts as is.
2. On the ObjectServer host, change to the scripts directory and run the ConfigOMNI script. For example, the following command configures the ObjectServer called NCOMS2 using the administrative password NC0M5password, or creates the ObjectServer called NCOMS2 if it does not exist, in the specified directory (OMNIHOME), and creates or modifies the itnmadmin and itnmuser users in the ObjectServer.

./ConfigOMNI -o NCOMS2 -p NC0M5password -h /opt/ibm/tivoli/netcool -u ITNMpassword

3. You might also need to update the Network Manager core settings and the Web GUI data source settings. For more information, see the Network Manager Knowledge Center, https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/install/task/ins_installingandconfiguringomnibus.html .
Related tasks
Installing and configuring Tivoli Netcool/OMNIbus

Preparing the database for Network Manager

After a supported database has been installed, you must install and run the database scripts to configure the topology database for use by Network Manager IP Edition. You must run the scripts before installing Network Manager IP Edition.

About this task
If you downloaded the compressed software package from Passport Advantage, the database creation scripts are included at the top level of the uncompressed software file. Copy the scripts to the database server and use them. You can also install the Network Manager IP Edition topology database creation scripts using Installation Manager by selecting the Network Manager topology database creation scripts package. The database scripts are installed by default in the precision/scripts/ directory in the installation directory (by default, /opt/IBM/netcool/core/precision/scripts/).

Procedure
1. Log in to the server where you installed Db2.
2. Change to the directory where your Db2 instance was installed and then change to the sqllib subdirectory.
3. Set up the environment by typing the command for your shell:
Bourne shell: . ./db2profile
C shell: source db2cshrc
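For example, for a Bourne-compatible shell, a sketch that assumes the instance owner is db2inst1 with the default home directory:

$ cd /home/db2inst1/sqllib
$ . ./db2profile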

The Network Manager IP Edition application wrapper scripts automatically set up the Db2 environment.
4. Locate the compressed database creation file db2_creation_scripts.tar.gz and copy it to the server where Db2 is installed. Decompress the file.
5. Change to the precision/scripts/ directory and run the create_db2_database.sh script as the Db2 administrative user for the instance (db2inst1):

./create_db2_database.sh database_name user_name -force

Where database_name is the name of the database, user_name is the Db2 user to use to connect to the database, and -force is an argument that forces any Db2 users off the instance before the database is created.
Important: The user_name must not be the administrative user. This user must be an existing operating system and Db2 user.
For example, to create a Db2 database that is called ITNM for the Db2 user ncim, type:

./create_db2_database.sh ITNM ncim

6. After you run create_db2_database.sh, restart the database as the Db2 administrative user as follows: run db2stop and then run db2start. (A restart sketch follows this procedure.)
7. When running the Network Manager IP Edition installer later on, make sure you select the option to configure an existing Db2 database. The Network Manager IP Edition installer can then create the tables in the database either on the local or a remote host, depending on where your database is installed. The installer populates the connection properties in the following files; you can check these files for any problems with your connection to the database:
• The DbLogins.DOMAIN.cfg and MibDbLogin.cfg files in $NCHOME/etc/precision. These files are used by the Network Manager IP Edition core processes.

• The tnm.properties file in $NMGUI_HOME/profile/etc/tnm. This file is used by the Network Manager IP Edition GUI.
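A minimal sketch of the restart in step 6, assuming db2inst1 is the Db2 administrative user as in the earlier examples:

$ su - db2inst1
$ db2stop
$ db2start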

Installing Network Manager IP Edition and Netcool Configuration Manager

Install Network Manager IP Edition and Netcool Configuration Manager to form the basis of the Networks for Operations Insight feature.

Before you begin
• Ensure you have installed and configured the base products and components of Netcool Operations Insight, including Tivoli Netcool/OMNIbus, Netcool/Impact, and Operations Analytics - Log Analysis, and the associated components and configurations. See Supported products for Networks for Operations Insight.
• Obtain the following information about the ObjectServer:
– ObjectServer name, for example, NCOMS
– Host name and port number
– Administrator user ID
– Administrator password
• Obtain the following information about your Db2 database:
– Database name
– Host name and port number
– Administrator user ID with permissions to create tables
– Administrator user password
• Obtain the packages from IBM Passport Advantage. For information about the eAssembly numbers you need for the packages, see http://www-01.ibm.com/support/docview.wss?uid=swg24043698 .
• Obtain the latest supported fix packs for Network Manager IP Edition and Netcool Configuration Manager from IBM Fix Central, at http://www.ibm.com/support/fixcentral/ . For information on the product and component versions supported in the current version of Netcool Operations Insight, including supported fix packs, see "Products and components on premises" on page 5.
• Ensure that a compatible version of Python is installed on this server before you start. On Linux, Network Manager IP Edition core components require version 2.6 or 2.7 of Python to be installed on the server where the core components are installed. On AIX®, Network Manager IP Edition requires version 2.7.5 of Python.

About this task
These instructions describe the options that are presented in the Installation Manager in wizard mode. Other modes are also available with equivalent options.

Procedure
1. Start Installation Manager and install the following packages:

Network Manager Core Components V4.2.0.10: Installs the Network Manager IP Edition core components, sets up connection to the specified ObjectServer, sets up connection to the database to be used for the NCIM topology and creates the tables (needs to be selected), creates Network Manager IP Edition default users, sets up network domain, and configures the details for the poller aggregation.
For more information, see https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/install/task/ins_installingcorecomponents.html .


The Network Manager IP Edition core components can be installed on server 4 of the scenario described in Performing a fresh installation.

Network Manager GUI Components V4.2.0.10
Installs the Network Manager IP Edition GUI components, sets up the connection to the specified ObjectServer, sets up the connection to the NCIM topology database, and sets up the default users.
For more information, see https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/install/task/ins_installingguicomponents.html.
The Network Manager IP Edition GUI components can be installed on server 3 of the scenario described in Performing a fresh installation. The GUI components of other products in the solution, Netcool Configuration Manager and Reporting Services, would also be on this host.

Network Health Dashboard V4.2.0.10
Installs the Network Health Dashboard. Installing the Network Health Dashboard installs the following roles, which allow users to work with the Network Health Dashboard:
• ncp_networkhealth_dashboard
• ncp_networkhealth_dashboard_admin
• ncp_event_analytics
The Network Health Dashboard is only available if you have Network Manager as part of Netcool Operations Insight. The Network Health Dashboard monitors a selected network view, and displays device and interface availability within that network view. It also reports on performance by presenting graphs, tables, and traces of KPI data for monitored devices and interfaces. A dashboard timeline reports on device configuration changes and event counts, enabling you to correlate events with configuration changes. The dashboard includes the Event Viewer, for more detailed event information.
Note: The Network Health Dashboard must be installed on the same host as the Network Manager IP Edition GUI components.

Network Manager Reports V4.2.0.10
Installs the reports provided by Network Manager IP Edition that you can use as part of the Reporting Services feature.
Reporting Services requires a Db2 database to store its data. This database must be running during installation. If the database is installed on the same server as Reporting Services, the installer configures the database during installation. If the database is on a different server, you must configure the database before you install Reporting Services. In the scenario described in Performing a fresh installation, where the Db2 database is on a different server, you must set up the remote Db2 database for Reporting Services as follows:
a. From the Jazz for Service Management package, copy the TCR_generate_content_store_db2_definition.sh script to the server where Db2 is installed.
b. Run the following command:

./TCR_generate_content_store_db2_definition.sh database_name db2_username


Where database_name is the name you want for the Reporting Services database, and db2_username is the user name to connect to the content store, that is, the database owner (db2inst1).
c. Copy the generated SQL script to a temporary directory and run it against your Db2 instance as the Db2 user (db2inst1), for example:

$ cp tcr_create_db2_cs.sql /tmp/tcr_create_db2_cs.sql
$ su - db2inst1 -c "db2 -vtf /tmp/tcr_create_db2_cs.sql"

Netcool Configuration Manager V6.4.2.11
Installs the Netcool Configuration Manager components and loads the required database schema. For Server Installation Type, select Presentation Server and Worker Server to install both the GUI and worker servers.
For more information, see http://www-01.ibm.com/support/knowledgecenter/SS7UH9_6.4.2/ncm/wip/install/task/ncm_ins_installingncm.html.
The Netcool Configuration Manager components can be installed on server 3 of the scenario described in Performing a fresh installation.
For more information about fix pack 2, see http://www.ibm.com/support/knowledgecenter/SS7UH9_6.4.2/ncm/wip/relnotes/ncm_rn_6422.html.

Reporting Services environment
Installs the reports provided by Netcool Configuration Manager (ITNCM-Reports) that you can use as part of the Reporting Services feature.
2. Apply the latest supported Network Manager IP Edition and Netcool Configuration Manager fix packs. For information on the product and component versions supported in the current version of Netcool Operations Insight, including supported fix packs, see “Products and components on premises” on page 5.
3. On the host where the Network Manager GUI components are installed, install the tools and menus to launch the custom apps of the Network Manager Insight Pack in the Operations Analytics - Log Analysis GUI from the Network Views.
a) In $NMGUI_HOME/profile/etc/tnm/topoviz.properties, set the topoviz.unity.customappsui property, which defines the connection to Operations Analytics - Log Analysis. For example:

# Defines the LogAnalytics custom App launcher URL
topoviz.unity.customappsui=https://server3:9987/Unity/CustomAppsUI

b) In the $NMGUI_HOME/profile/etc/tnm/menus/ncp_topoviz_device_menu.xml file, define the Event Search menu item by adding it to the file.

4. Optional: Follow the steps to configure the integration between Network Manager IP Edition and Netcool Configuration Manager as described in “Configuring integration with Netcool Configuration Manager” on page 69.

Results
The ports used by each installed product or component are displayed. The ports are also written to the $NCHOME/log/install/Configuration.log file.

What to do next
• To add the performance management feature, see “Installing Performance Management” on page 91.
• To set up the Device Dashboard for the network performance monitoring feature, see “Installing the Device Dashboard” on page 93.
• Search on IBM Fix Central for available interim fixes and apply them. See http://www.ibm.com/support/fixcentral/.
Related reference
Installing Network Manager
Related information
Installing Netcool Configuration Manager
V4.2 download document

Configuring integration with Netcool Configuration Manager
After installing the products, you can configure the integration between Network Manager and Netcool Configuration Manager.
Related tasks
Installing Netcool Configuration Manager
Related reference
Netcool Configuration Manager release notes
Installation information checklist
Related information
Preparing Db2 databases for Netcool Configuration Manager

User role requirements
Certain administrative user roles are required for the integration.
Note: For single sign-on information, see the related topic links.

DASH user roles
The following DASH roles are required for access to the Netcool Configuration Manager components that are launched from within DASH, such as the Netcool Configuration Manager Wizards and the Netcool Configuration Manager - Base and Netcool Configuration Manager - Compliance clients.
Either create a DASH user with the same name as an existing Netcool Configuration Manager user who already has the 'IntellidenUser' role, or use an appropriate Network Manager user, such as itnmadmin, who is already set up as a DASH user. If you use the Network Manager user, create a corresponding new Netcool Configuration Manager user with the same name (the password can differ), and assign the 'IntellidenUser' role to this new user.
Important: If a DASH user is being created on Network Manager with the same name as an existing Netcool Configuration Manager user, then the user also needs to be added to an appropriate Network Manager user group, or alternatively be granted any required Network Manager roles manually.
Additionally, assign the following roles to your DASH user:
• ncp_rest_api
• ncmConfigChange
• ncmConfigSynch
• ncmIDTUser
• ncmPolicyCheck
• ncmActivityViewing
• ncmConfigViewing
• ncmConfigEdit
• ncmDashService

The following table cross-references security requirements between user interfaces, DASH roles, Netcool Configuration Manager functionality, and Netcool Configuration Manager realm content permissions. Use this information to assign DASH roles and define realm content permissions.

Table 19. UI security by DASH roles, Netcool Configuration Manager functionality, and realm content permissions

Access | DASH role | Functionality | Realm content permissions
Apply Modelled Command Set | ncmConfigChange | Execute Configuration Change | View, Execute
Apply Native Command Set | ncmConfigChange | Execute Configuration Change, Apply Native Command Sets | View, Execute
Synchronize (ITNCM to Device) | ncmConfigSynch | Execute Configuration Synchronization | View, Execute
Submit Configuration Change | ncmConfigChange | Execute Configuration Change | View, Execute
Apply Policy | ncmPolicyCheck | Execute Compliance Policy | View
View Configuration | ncmConfigViewing | n/a | View
Edit Configuration | ncmConfigEdit | n/a | View, Modify
Compare Configuration | ncmConfigViewing | n/a | View
IDT Automatic | ncmIDTUser | IDT Access, IDT Allow Auto Login | View
IDT Manual | ncmIDTUser | IDT Access, IDT Allow Manual Login | View
Find Device | n/a | n/a | View
View UOW Log | n/a | n/a | n/a
View IDT Log | n/a | n/a | View
Activity Viewer | ncmActivityViewing | n/a | n/a
Device Synchronization | ncp_rest_api | n/a | n/a
Access DASH services (through right-click menus) | ncmDashServices | n/a | n/a

Reporting Services user roles
Reporting Services and the Netcool Configuration Manager default reports are installed together with the DASH components. Any user who needs to access reports requires the following permissions:
• The relevant Reporting Services roles for accessing the Reporting node in the DASH console. Assign these roles to enable users to run reports to which they are authorized from the Reporting Services GUI.
• The authorization to access the report set, and the relevant Reporting Services roles for working with the reports. Assign these permissions to enable users to run Netcool Configuration Manager reports from Network Manager topology displays, the Active Event List, and the Reporting Services GUI.

For information about authorizing access to a report set and assigning roles by user or group, go to the IBM Tivoli Systems Management Information Center at http://www-01.ibm.com/support/knowledgecenter/SS3HLM/welcome, locate the Reporting Services documentation node, and search for authorization and user roles.

Other user roles
To configure the Alerts menu in the Web GUI, the ncw_admin role is required.

Installing the Dashboard Application Services Hub components
For integrated scenarios, Netcool Configuration Manager provides the following Dashboard Application Services Hub components: the Activity Viewer, the Dashboard Application Services Hub wizards, and the Netcool Configuration Manager thick-client launch portal.

Before you begin
From Version 6.4.2 onwards, Netcool Configuration Manager reporting is no longer installed as part of the Dashboard Application Services Hub components installation, but rather as part of the Netcool Configuration Manager main installation.
Important: Before installing the Dashboard Application Services Hub components, install Netcool Configuration Manager using the 'Integrated' option.

About this task
Restriction: The Netcool Configuration Manager Dashboard Application Services Hub components must be installed as the same user who installed Network Manager.

Procedure
1. Log onto the Dashboard Application Services Hub server.
2. Change to the /eclipse subdirectory of the Installation Manager Group installation directory and use the following command to start Installation Manager:
./IBMIM
To record the installation steps in a response file for use with silent installations on other computers, use the '-record response_file' option. For example:

IBMIM -record C:\response_files\install_1.xml

3. Configure Installation Manager to download package repositories from IBM Passport Advantage:
a) From the main menu, choose File > Preferences. You can set preferences for proxy servers in IBM Installation Manager. Proxy servers enable connections to remote servers from behind a firewall.
b) In the Preferences window, expand the Internet node and select one of the following options:
FTP Proxy
Select this option to specify a SOCKS proxy host address and a SOCKS proxy port number.
HTTP Proxy
Select this option to enable an HTTP or SOCKS proxy.
c) Select Enable proxy server.
d) In the Preferences window, select the Passport Advantage panel.
e) Select Connect to Passport Advantage.
f) Click Apply, and then click OK.
4. In the main Installation Manager window, click Install, select IBM Dashboard Applications for ITNCM, and then follow the installation wizard instructions to complete the installation.
5. Accept the license agreement, select an installation directory, and supply the following details:

Netcool Configuration Manager database details
Sid/service name/database name (Db2)
Database hostname
Port
Username
Password
Dashboard Application Services Hub administrative credentials
Dashboard Application Services Hub administrator username (default is smadmin)
Dashboard Application Services Hub administrator password
Network Manager administrative credentials
Username: default is itnmadmin (or the Dashboard Application Services Hub superuser, who must have the 'ncw_admin' role in Dashboard Application Services Hub)
Password
Netcool Configuration Manager presentation server
Connection details to the Netcool Configuration Manager Presentation server
A skip validation option should the Presentation server be unavailable
Reporting Services server
Connection details to the Reporting Services server
A skip validation option should the Reporting Services server be unavailable
6. Complete the installation.

Example
Tip: As a best practice, you can generate a response file through Installation Manager.
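A sketch of recording a response file without performing an actual installation, using Installation Manager's -record and -skipInstall options (the file paths are hypothetical):

# Record the wizard choices to a response file; -skipInstall writes installation
# registry data to a scratch location instead of performing the install
./IBMIM -record /tmp/install_1.xml -skipInstall /tmp/imRegistry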


What to do next
Before you can access the Netcool Configuration Manager Dashboard Application Services Hub components, you must set up the Netcool Configuration Manager Dashboard Application Services Hub users and provide them with appropriate access permission. Once users have been set up, you access the Netcool Configuration Manager Dashboard Application Services Hub components, that is, the Activity Viewer, the Dashboard Application Services Hub wizards, and the thick-client launch portal, in the following ways:
• You launch the stand-alone Netcool Configuration Manager UIs (sometimes referred to as the thick-client UIs) from the Dashboard Application Services Hub thick-client launch portal.
• You access the Activity Viewer, the Dashboard Application Services Hub wizards, and a subset of reports in context from Network Manager and Tivoli Netcool/OMNIbus.
• You access the complete reports using the Dashboard Application Services Hub Reporting Services GUI.

Configuring separate database types
Under certain circumstances, such as when different or remote databases are used in an integrated environment, you must perform additional database configuration steps.

About this task
If you are installing Network Manager and ITNCM-Reports together, and if the Network Manager database is Db2 and on a different server, then its component databases must be cataloged.

If Network Manager uses an Informix® database in a distributed environment and Dashboard Application Services Hub is not installed on the same server as Network Manager, you must ensure that the correct library jars are used.

Procedure
1. Required: If Network Manager and ITNCM-Reports are installed together, and if the Network Manager database is Db2 and on a different server:
a) To connect to a Db2 database on a server remote from your TCR installation, ensure that a Db2 client is installed and the remote database cataloged. When the database server is remote to the WebSphere Application Server node where configuration is taking place, enter the following command at the node to add a TCP/IP node entry to the node directory:

db2 catalog tcpip node NODENAME remote REMOTE server PORT

where
NODENAME
Specifies a local alias for the node to be cataloged.
REMOTE
Specifies the fully qualified domain name of the remote DB server.
PORT
Is the port on which the database is accessible, typically port 50000.
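For example, with hypothetical values (node alias NCIMNODE, database host db2server.example.com, port 50000):

db2 catalog tcpip node NCIMNODE remote db2server.example.com server 50000

Then catalog the database itself against that node: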

db2 catalog database database_name at node NODENAME

where
database_name
Specifies the Db2 database name.
NODENAME
Is the local alias specified in the previous step.
b) Add 'source $HOME/sqllib/db2profile' to the .bash_profile of the user who installed Netcool Configuration Manager (usually 'icosuser'). Here $HOME refers to the home directory of the user that was configured during the installation of the Db2 client to manage the client (usually db2inst1).
Note: The .bash_profile is only used by the bash shell; the equivalent file differs for sh, csh, or ksh.
c) Restart your reporting server after this update. Before restarting the Reporting Server, check that the amended login profile has been sourced.
Tip: For installations that use a Db2 database, Cognos requires 32-bit Db2 client libraries, which are installed by the 64-bit Db2 client. However, there may be further dependencies on other 32-bit packages being present on the system; if such errors are reported, you can check this with 'ldd $library_name'.
2. Required: If Network Manager and ITNCM-Reports are installed together, and if the Network Manager database is Oracle:
a) To connect to an Oracle database from your TCR installation, ensure that ITNCM-Reports have been installed, and then update the itncmEnv.sh file, located by default at:

/opt/IBM/tivoli/netcool/ncm/reports/itncmEnv.sh

Export the following variables (where ncm_install_dir is the Netcool Configuration Manager installation directory):
ORACLE_HOME

ORACLE_HOME=ncm_install_dir/reports/oracle
export ORACLE_HOME

TNS_ADMIN

TNS_ADMIN=ncm_install_dir/reports/oracle/network/admin
export TNS_ADMIN

LD_LIBRARY_PATH

LD_LIBRARY_PATH=ncm_install_dir/reports/oracle:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH

b) Create a tnsnames.ora file located in ncm_install_dir/reports/oracle/network/admin/.
c) Add the NCIM database to the tnsnames.ora file. For example, where hostname is the host of the NCIM database:

NCIM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(Host = hostname)(Port = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = NCIM)
    )
  )

d) Add 'source ncm_install_dir/reports/itncmEnv.sh' to the user's .bash_profile.
Note: The .bash_profile is only used by the bash shell; the equivalent file differs for sh, csh, or ksh.
e) Restart your reporting server after this update. Before restarting the Reporting Server, check that the amended login profile has been sourced.

Configuring integration with Tivoli Netcool/OMNIbus
Ensure that you have Netcool/OMNIbus Knowledge Library (NcKL) Enhanced Probe Rules for Netcool Configuration Manager installed on your Tivoli Netcool/OMNIbus server.

Before you begin
Deploy rules specific to Netcool Configuration Manager. These rules are bundled with Netcool Configuration Manager and deployed on the Netcool Configuration Manager Presentation server during installation, and are located in the ncm_install_dir/nckl-rules directory.
Note: This procedure is no longer required for device synchronization with Network Manager, and the mapping of devices between Netcool Configuration Manager and Network Manager.
The standard Netcool/OMNIbus Knowledge Library configuration must have been applied to the ObjectServer and to the Probe for SNMP as part of the prerequisite tasks for the integration. The $NC_RULES_HOME environment variable must also have been set on the computer where the probe is installed. This environment variable is set to $NCHOME/etc/rules on UNIX or Linux.
Tip: To source the Network Manager environment script, run the following command, where /opt/IBM/tivoli/netcool is the default Network Manager directory:
. /opt/IBM/tivoli/netcool/env.sh
Note: If you have existing Probe for SNMP custom rules that you want to preserve, create backups as required before deploying the Netcool/OMNIbus Knowledge Library rules in step two.

About this task
The location denoted by $NC_RULES_HOME holds a set of Netcool/OMNIbus Knowledge Library lookup files and rules files within a number of sub-directories. In particular, the $NC_RULES_HOME/include-snmptrap/ibm subdirectory contains files that can be applied to the Probe for SNMP. To support the integration, you must add customized rules for Netcool Configuration Manager to this subdirectory.
Remember: If you have installed Netcool/OMNIbus Knowledge Library (NcKL) Enhanced Probe Rules Version 4.4 Multiplatform English (NcKL4.4) on your Tivoli Netcool/OMNIbus server, which is the recommended option, you do not need to install the ITNCM-specific rules files, as documented here.

Procedure
Installing rules files specific to Netcool Configuration Manager (not the recommended option)
1. From the server where you have installed Netcool Configuration Manager, copy the following file:

• ncm_install_dir/nckl_rules/nckl_rules.zip, where ncm_install_dir represents the installation location of Netcool Configuration Manager, for example /opt/IBM/tivoli/netcool/ncm.
Copy this file to a temporary location on the computer where the Probe for SNMP is installed.
2. Extract the contents of the nckl_rules.zip file, and then copy the extracted files to the $NC_RULES_HOME/include-snmptrap/ibm subdirectory.
3. If ObjectServer failover has already been configured, proceed to step 4. Otherwise, perform the following steps:
a) Go to the folder in which the mttrapd.props file has been placed, for example $NCHOME/omnibus/probes/AIX5, where AIX5 is specific to your operating system.
b) Edit the mttrapd.props file by commenting out the backup ObjectServer reference:
#ServerBackup : ''
4. To ensure that the probe can reference the enhanced lookup and rules files, edit the $NC_RULES_HOME/snmptrap.rules file by uncommenting the following include statements, as shown:

include "$NC_RULES_HOME/include-snmptrap/ibm/ibm.master.include.lookup include "$NC_RULES_HOME/include-snmptrap/ibm/ibm.master.include.rules" include "$NC_RULES_HOME/include-snmptrap/ibm/ibm-preclass.include.snmptrap.rules"

5. Run the probe. If the probe was already running, force the probe to re-read the rules file so that the changes can take effect. For example, locate the PID of the probe by running the following command on the server running the probe, looking for a process named nco_p_mttrapd, and then kill that process:

ps -eaf | grep mttrapd
kill -9 PID

Note: If the probe is installed on a different computer from Network Manager or the DASH portal, you must restart the probe manually.

Configuring integration with Network Manager
Copy a number of jar and keystore files from the Network Manager GUI server into the Netcool Configuration Manager instance of WebSphere.

About this task
Note: The following default locations may differ depending on where WebSphere was installed on your Network Manager and Netcool Configuration Manager servers.

Procedure
Copy the following files from the Network Manager GUI server into the corresponding folder in the Netcool Configuration Manager WebSphere instance (see the example after this list):
• /opt/IBM/WebSphere/AppServer/etc/vmm4ncos.jks
• /opt/IBM/WebSphere/AppServer/lib/ext/com.ibm.tivoli.ncw.ncosvmm.jar
• /opt/IBM/WebSphere/AppServer/lib/ext/jconn3.jar
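A minimal copy sketch, assuming default WebSphere paths on both servers and a hypothetical host name ncmserver for the Netcool Configuration Manager server:

# Run from the Network Manager GUI server; ncmserver and the user account are assumptions
scp /opt/IBM/WebSphere/AppServer/etc/vmm4ncos.jks user@ncmserver:/opt/IBM/WebSphere/AppServer/etc/
scp /opt/IBM/WebSphere/AppServer/lib/ext/com.ibm.tivoli.ncw.ncosvmm.jar user@ncmserver:/opt/IBM/WebSphere/AppServer/lib/ext/
scp /opt/IBM/WebSphere/AppServer/lib/ext/jconn3.jar user@ncmserver:/opt/IBM/WebSphere/AppServer/lib/ext/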

Configuring device synchronization
You configure device synchronization to enable Netcool Configuration Manager to use Network Manager for network device discovery.

Before you begin
During Netcool Configuration Manager 6.4.2 installation, you are asked if the product is to be integrated or not. If you select Yes, the installer asks the necessary questions to set up the configuration of device synchronization between Netcool Configuration Manager and Network Manager.

A default value of 24 hours (1440 minutes) is defined in the Netcool Configuration Manager rseries.properties file for the periodic synchronization with Network Manager. For the initial synchronization, a large number of devices may already have been discovered by Network Manager, and it can take a considerable time before they are imported into Netcool Configuration Manager. (This also applies in a situation where the discovery scope is widened so that a significant number of new devices are added to Network Manager.) Consequently, the devices may not yet appear in the NMENTITYMAPPING table in the Netcool Configuration Manager database, and therefore the context tools (right-click tools) from Network Manager will not be available for those devices.
Tip: You can reduce this time by editing the rseries.properties file, and changing the mapping period to 60 (for example). This speeds up the process by which devices are added to the autodiscovery queue on Netcool Configuration Manager, but does not change the actual time to import each device configuration.
Tip: If the password for the itnmadmin user has changed on Network Manager, update the locally stored copy on Netcool Configuration Manager by using the icosadmin script located in /opt/IBM/tivoli/netcool/ncm/bin. For example, where password is the new itnmadmin password:

icosadmin ChangeNmPassword -u itnmadmin -p password

About this task
The configuration is stored in the rseries.properties file located in the ncm_install_dir/config/properties/ directory.
Network Manager:

NMEntityMappingComponent/baseURL=https://nmguiservername:16311
NMEntityMappingComponent/uri=/ibm/console/nm_rest/topology/devices/domain/NCOMS
NMEntityMappingComponent/uriParam=
NMEntityMappingComponent/uriProps=
#####Note: Complete URL = baseURL+uri+uriProps&uriParam

NMEntityMappingComponent/delay=10 ## delay on startup before first run
NMEntityMappingComponent/importRealm=ITNCM/@DOMAINNAME
NMEntityMappingComponent/maxDevicesPerRealm=50
NMEntityMappingComponent/ncmUser=administrator
NMEntityMappingComponent/period=1440 ## Daily (in minutes)
NMEntityMappingComponent/user=itnmadmin
NMEntityMappingComponent/passwd=netcool ## Optional: Install stores securely

Note: You can edit this file and the component configuration properties after installation if requirements change.
Before device synchronization runs for the first time, ensure that the Network Manager REST API user (in our example, 'itnmadmin') has the ncp_rest_api role in DASH.
Device synchronization is now done by a new core component of Netcool Configuration Manager, and is therefore part of the Netcool Configuration Manager component configuration and started automatically when Netcool Configuration Manager starts. Component startup is configured in ncm_install_dir/config/server/config.xml:

NMEntityMappingComponent com.intelliden.nmentitymapping.NMEntityMappingComponent

Note: The NMEntityMappingComponent is configured by default, so if you wish to stop it being started on Netcool Configuration Manager startup, you can comment it out in the config.xml file.
Note: There is a limit of 50 imported devices per realm in Netcool Configuration Manager. If there are more devices than this in a Network Manager domain, they are added to sub-realms (labeled 001, 002, and so on) in Netcool Configuration Manager.

Example
Troubleshooting the NM component
Verify that the component has started in the ncm_install_dir/logs/Server.out file:

Fri Jul 31 13:30:06 GMT+00:00 2015 - Starting component : NMEntityMappingComponent
Fri Jul 31 13:30:06 GMT+00:00 2015 - All components started

Verify that the config.xml file has the component specified for startup.
Verify that the NMENTITYMAPPING table has the new columns required for the new component implementation (a quick check follows the column listing below):

"NMENTITYMAPPING" ( "UNIQUEKEY" BIGINT NOT NULL, "ENTITYID" BIGINT NOT NULL DEFAULT 0, "RESOURCEBROWSERID" BIGINT NOT NULL DEFAULT 0, "DOMAINNAME" VARCHAR(64), "JPAVERSION" BIGINT NOT NULL DEFAULT 1, "ENTITYNAME" VARCHAR(255), "ACCESSIPADDRESS" VARCHAR(64), "SERIALNUMBER" VARCHAR(64), "VENDORTYPE" VARCHAR(64), "MODELNAME" VARCHAR(64), "OSVERSION" VARCHAR(64), "OSIMAGE" VARCHAR(255), "OSTYPE" VARCHAR(64), "HARDWAREVERSION" VARCHAR(64) )

Ensure that the Network Manager Rest API user has the ncp_rest_api role in DASH.

Configuring the Alerts menu of the Active Event List
You must add access to the Activity Viewer from the Active Event List by configuring the Alerts menu.

Procedure
1. From the navigation pane, click Administration > Event Management Tools > Menu Configuration.
2. From the Available menus list on the right, select alerts and click Modify.
3. From the Menus Editor window, select from the drop-down list under Available items, and then click Add selected item to add the item to the Current items list. The item is added as the last item.
4. Under Available items, select menu from the drop-down list. The list of all menu items that can be added to the Alerts menu is shown.
5. Select the Configuration Management item and click Add selected item. The item is added below the item in the Current items list.
6. Click Save and then click OK.

Results
The Configuration Management submenu and tools are now available in the Alerts menu of the Active Event List, for use with Netcool Configuration Manager events.
Note: Reports Menu options will not be displayed if the selected event is not enriched.

What to do next
You can optionally create a global filter to restrict the events displayed in the Active Event List to Netcool Configuration Manager events only. You can add this filter to the Web GUI either by using the WAAPI client or by using the Filter Builder. When creating the filter, specify a meaningful name (for example, ITNCMEvents) and define the filter condition by specifying the following SQL WHERE clause:

where Class = 87724

Migrating reports
If you have custom Reporting Services reports in an existing Netcool Configuration Manager installation, and are integrating with Network Manager, which has its own Reporting Services solution, you migrate your custom reports from the stand-alone to the integrated version of Reporting Services.

Before you begin
If you are installing Network Manager on the same server as your existing Netcool Configuration Manager installation, you must export your custom reports before installing Network Manager.

About this task
The report migration procedure is different for single and multiple server integrations.
If you are installing Netcool Configuration Manager and Network Manager on the same server
1. Export the custom reports from the existing Netcool Configuration Manager version of Reporting Services and copy them to a safe location.
Note: You export your custom reports before installing Network Manager to prevent the existing reports from being overwritten.
2. Disable and uninstall the existing Netcool Configuration Manager version of Reporting Services.
3. Install Network Manager and integrate it with the existing version of Netcool Configuration Manager as documented.
4. Import the custom reports into the Network Manager version of Reporting Services.
If you are installing Netcool Configuration Manager and Network Manager on different servers
1. Install Network Manager and integrate it with the existing version of Netcool Configuration Manager as documented.
2. Export the custom reports from the existing Netcool Configuration Manager version of Reporting Services and copy them to the Network Manager server.
3. Import the custom reports into the Network Manager version of Reporting Services.
4. Disable the existing Netcool Configuration Manager version of Reporting Services.

Exporting custom reports (distributed integration architecture)
After you have installed Network Manager on a server other than your existing Netcool Configuration Manager installation and performed all integration tasks, you export your custom Reporting Services reports. You also disable and uninstall the existing Netcool Configuration Manager version of Reporting Services.

Before you begin
You export reports after installing Network Manager when all of the following circumstances apply to your scenario:
• You are already running Reporting Services as part of an existing, non-integrated Netcool Configuration Manager installation.
• You are deploying a distributed integration architecture and have already installed Network Manager on a server other than your existing version of Netcool Configuration Manager.
• You have customized Netcool Configuration Manager reports that need to be migrated into your planned integrated solution.

About this task
When you install the Network Manager version of Reporting Services on a server other than your existing version of Netcool Configuration Manager, the previous reports as well as the previous version of Reporting Services remain on the Netcool Configuration Manager server. To migrate such reports into an integrated solution, you perform the following tasks:

If you are installing Netcool Configuration Manager and Network Manager on different servers
1. Install Network Manager and integrate it with the existing version of Netcool Configuration Manager as documented.
2. Export the custom reports from the existing Netcool Configuration Manager version of Reporting Services and copy them to the Network Manager server.
3. Import the custom reports into the Network Manager version of Reporting Services.
4. Disable the existing Netcool Configuration Manager version of Reporting Services.
Remember: You do not have to migrate the standard Netcool Configuration Manager reports, because these will be installed together with the Network Manager version of Reporting Services (in addition to a number of Network View reports). You only migrate reports you have customized since installing the standard reports, or new reports you have created.

Procedure
1. Log into the Netcool Configuration Manager version of Reporting Services using the following URL: http://hostname:16310/ibm/console, where hostname is the name of your Netcool Configuration Manager server and 16310 is the default port number for Reporting Services.
2. Click Reporting > Common Reporting.
3. Click Launch on the toolbar, and then select Administration from the drop-down menu.
4. Select the Configuration tab, then click Content Administration.
5. Click New Export to launch the New Export wizard.
6. Enter a name and description for the report export, then click Next.
7. Accept the default deployment method and click Next.
8. Click the Add link and select the ITNCM Reports checkbox, then move ITNCM Reports to the Selected Entries list.
9. Click OK, then Next > Next > Next, accepting the default values.
10. Select New archive, then Next > Next, accepting the default values.
11. Click Finish > Run > OK. The reports are exported and the new export archive is displayed.
12. Navigate to the following directory: /opt/IBM/tivoli/netcool/ncm/tipv2Components/TCRComponent/cognos/deployment, where you can view the report archive, for example:
-rw-r--r-- 1 icosuser staff 262637 23 Feb 10:27 ncm_export.zip
where ncm_export.zip is the report archive.
13. Copy the file to the following directory on the Network Manager server (a hypothetical copy command follows): $TIP_HOME/../TCRComponent/cognos/deployment
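A sketch of step 13 using scp, assuming a hypothetical Network Manager host nmserver and user netcool, and a default TCRComponent location on that server (adjust the target path to your $TIP_HOME):

# Run from the Netcool Configuration Manager server; host, user, and target path are assumptions
scp /opt/IBM/tivoli/netcool/ncm/tipv2Components/TCRComponent/cognos/deployment/ncm_export.zip \
  netcool@nmserver:/opt/IBM/tivoli/tipv2Components/TCRComponent/cognos/deployment/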

Results
You have exported the custom reports and copied them to the Network Manager server.

What to do next
Next, you import the archived reports into the Network Manager version of Reporting Services, and then disable the Netcool Configuration Manager version of Reporting Services.

Importing reports (distributed integration architecture)
After exporting the custom reports and copying them to the Network Manager server, you import the archived reports into the Network Manager version of Reporting Services, and then disable the Netcool Configuration Manager version of Reporting Services.

Before you begin
You must have exported the custom reports and copied them to the Network Manager server.

About this task

Procedure
1. Log into the Network Manager Dashboard Application Services Hub.
2. Click Reporting > Common Reporting.
3. Click Launch on the toolbar, and then select Administration from the drop-down menu.
4. Select the Configuration tab, then click Content Administration.
5. Click New Import to launch the New Import wizard. A list of available report archives is displayed.
6. Select the archive that you exported earlier and click Next.
7. Select ITNCM Reports, then Next and Next again, accepting the default values.
8. Click Finish > Run > OK. The reports are imported and the new archive is displayed in the list of archives.
9. Close the Common Reporting tab and click the Common Reporting link in the navigation pane. The custom reports are now available in the Network Manager version of Reporting Services.
10. Navigate to the following directory: /opt/IBM/tivoli/netcool/ncm/bin/utils/support
11. Run the setPlatform.sh script (./setPlatform.sh) and disable Reporting, then exit. When the Netcool Configuration Manager server is restarted, Reporting Services will no longer be running.

Results
You have now completed the migration of custom reports in your distributed environment.

Importing reports (single server integration architecture)
After exporting the custom reports, disabling and uninstalling the Netcool Configuration Manager version of Reporting Services, and completing all other integration steps, you import the report archive into the Network Manager version of Reporting Services.

Before you begin
You must have exported the custom reports before installing Network Manager on the same server as your existing Netcool Configuration Manager installation.

About this task

Procedure
1. Log into the Network Manager Dashboard Application Services Hub.
2. Click Reporting > Common Reporting.
3. Click Launch on the toolbar, and then select Administration from the drop-down menu.
4. Select the Configuration tab, then click Content Administration.
5. Click New Import to launch the New Import wizard. A list of available report archives is displayed.
6. Select the archive that you exported earlier and click Next.
7. Select ITNCM Reports, then Next and Next again, accepting the default values.
8. Click Finish > Run > OK. The reports are imported and the new archive is displayed in the list of archives.
9. Close the Common Reporting tab and click the Common Reporting link in the navigation pane.

Results
The custom reports are now available in the Network Manager version of Reporting Services.

Configuring Single Sign-On for Netcool Configuration Manager
The single sign-on (SSO) capability in Tivoli® products means that you can log on to one Tivoli application and then launch to other Tivoli web-based or web-enabled applications without having to re-enter your user credentials.
The repository for the user IDs is the Tivoli Netcool/OMNIbus ObjectServer. A user logs on to one of the participating applications, at which time their credentials are authenticated at a central repository. With the credentials authenticated to a central location, the user can then launch from one application to another to view related data or perform actions. Single sign-on can be achieved between applications deployed to DASH servers on multiple machines.
Single sign-on capabilities require that the participating products use Lightweight Third Party Authentication (LTPA) as the authentication mechanism. When SSO is enabled, a cookie is created containing the LTPA token and inserted into the HTTP response. When the user accesses other Web resources in any other application server process in the same Domain Name Service (DNS) domain, the cookie is sent with the request. The LTPA token is then extracted from the cookie and validated. If the request is between different cells of application servers, you must share the LTPA keys and the user registry between the cells for SSO to work. The realm names on each system in the SSO domain are case sensitive and must match exactly. See Managing LTPA keys from multiple WebSphere® Application Server cells on the WebSphere Application Server Information Center.
When configuring ITNCM-Reports for an integrated installation, ensure you configure single sign-on (SSO) on the Reporting Services server. Specifically, you must configure SSO between the instance of WebSphere that is hosting the Network Manager GUI, and the instance of WebSphere that is hosting ITNCM Reports. This prevents unwanted login prompts when launching reports from within Network Manager. For more information, see the related topic links.

Creating user groups for DASH
To configure single sign-on (SSO) between DASH and Netcool Configuration Manager, you must create Netcool Configuration Manager groups and roles for DASH.

Before you begin
Note: For SSO between DASH and Netcool Configuration Manager to work, the user groups specified in this procedure must exist in both DASH and Netcool Configuration Manager. Network Manager and Netcool Configuration Manager users in DASH should use the same authentication type, for example, ObjectServer.
Note: The IntellidenUser role needs to be assigned to the IntellidenUser group. Similarly, the IntellidenAdminUser role needs to be given to the IntellidenAdminUser group.

About this task

Procedure
1. Log onto the WebSphere Administrative console of the Network Manager GUI server as the profile owner (for example, smadmin).
2. Create a group by selecting Users and Groups > Manage Groups > Create.
3. Enter IntellidenUser in the Group name field.
4. Click Create, then click Create Like.
5. Enter IntellidenAdminUser in the Group name field. IntellidenAdminUser is required for access to Account Management in Netcool Configuration Manager.
6. Click Create, then click Close.

7. Log off from the WebSphere Administrative console, then log on to the DASH GUI.
8. Select Console Settings > Roles > IntellidenUser.
9. Click Users and Groups > Add Groups > Search, then select the IntellidenUser group, and then click Add.
10. Select Console Settings > Roles > IntellidenAdminUser.
11. Click Users and Groups > Add Groups > Search, then select the IntellidenAdminUser group, and then click Add.

What to do next
After creating Netcool Configuration Manager groups and roles for DASH, you create Netcool Configuration Manager users for DASH.

Creating users for DASH
This section explains how to create the Netcool Configuration Manager Intelliden super-user as well as the default users: administrator, operator, and observer for DASH.

Before you begin
For single sign-on (SSO) between DASH and Netcool Configuration Manager to work, a user must exist (that is, have an account) in both DASH and Netcool Configuration Manager. At install time, Netcool Configuration Manager automatically creates four users: Intelliden, administrator, operator, and observer. Of these users, only the Intelliden user must be created in DASH. However, it is advisable that the other users are also created.
Note: Only the username must match; it is not necessary that the passwords also match. After single sign-on configuration is complete, the user password entered in DASH will be used to authenticate a Netcool Configuration Manager login.

About this task
This task describes how to create the previously listed Netcool Configuration Manager users for DASH.

Procedure
1. Log onto the WebSphere console of the Network Manager GUI server as the profile owner (for example, smadmin).
2. Click Users and Groups > Manage Users, then click Create.
3. Enter Intelliden in the User ID, First name, and Last Name fields.
4. Enter the Intelliden user's password in the Password and Confirm Password fields.
5. Click on Group Membership and select Search.
6. Highlight the IntellidenAdminUser and IntellidenUser groups in the matching groups list, and click Add, then click Close.
7. Click Create, then click Create Like.
8. Enter administrator in the following fields:
• User ID
• First name
• Last Name
• Password
• Confirm password
9. Click on Group Membership and select Search.
10. Highlight the IntellidenAdminUser and IntellidenUser groups in the matching groups list, and click Add, then click Close.
11. Click Create and then Close.

12. Click Create.
13. Enter operator in the following fields:
• User ID
• First name
• Last Name
• Password
• Confirm password
14. Click on Group Membership and select Search.
15. Highlight the IntellidenUser group in the matching groups list, and click Add and then Close.
16. Click Create, then click Create Like.
17. Enter observer in the following fields:
• User ID
• First name
• Last Name
• Password
• Confirm password
18. Click on Group Membership and select Search.
19. Highlight the IntellidenUser group in the matching groups list, and click Add, then click Close.
20. Click Create and then Close.

What to do next
After you have created the Netcool Configuration Manager users for DASH, you export the LTPA keystore to the Netcool Configuration Manager server.

Exporting the DASH LTPA keystore
For added security, the contents of the LTPA token are encrypted and decrypted using a keystore (referred to in the subsequent procedure as the LTPA keystore) maintained by WebSphere. In order for two instances of WebSphere to share authentication information via LTPA tokens, they must both use the same LTPA keystore. The IBM Admin Console makes this a simple process of exporting the LTPA keystore on one instance of WebSphere and importing it into another.

About this task
This task describes how to export the LTPA keystore from the instance of WebSphere running on the Network Manager DASH server to the instance of WebSphere running on the Netcool Configuration Manager server for keystore synchronization.

Procedure
1. Launch the DASH Admin Console. For example: http://www.nm_gui_server_ip.com:16310/ibm/console.
2. Navigate to Settings > WebSphere Administrative Console.
3. Click Security > Global security.
4. Under the Authentication mechanisms and expiration tab, click LTPA.
5. Under the Cross-cell single sign-on tab, enter a password in the Password and Confirm password fields. The password will subsequently be used to import the LTPA keystore on the Netcool Configuration Manager server.
6. Enter the directory and filename you want the LTPA keystore to be exported to in the Fully qualified key file name field.
7. Complete by clicking Export keys.

8. Transfer the LTPA keystore to the Netcool Configuration Manager server.

Results
You will receive a message indicating that the LTPA keystore has been exported successfully.

What to do next
You now configure the SSO attributes for DASH.
Related tasks
Importing the DASH LTPA keystore to the Netcool Configuration Manager server
For added security the contents of the LTPA token are encrypted and decrypted using a keystore maintained by WebSphere. In order for two instances of WebSphere to share authentication information via LTPA tokens they must both use the same keystore. The IBM admin console makes this a simple process of exporting the keystore on one instance of WebSphere and importing it into another.

Configuring single sign-on attributes for DASH
Configuring SSO is a prerequisite to integrating products that are deployed on multiple servers. All DASH server instances must point to the central user registry.

About this task
Use these instructions to configure single sign-on attributes for DASH.

Procedure
1. Launch the DASH Admin Console. For example: http://www.nm_gui_server_ip.com:16310/ibm/console.
2. Navigate to Settings > WebSphere Administrative Console.
3. Select Security, then click Global Security > Web and SIP Security > Single sign on (SSO).
4. In the Authentication area, expand Web security, then click Single sign on (SSO).
5. Select the Enabled option if SSO is disabled.
6. Deselect Requires SSL.
7. Enter the fully-qualified domain names in the Domain name field where SSO is effective. If the domain name is not fully qualified, the DASH server does not set a domain name value for the LTPAToken cookie and SSO is valid only for the server that created the cookie. For SSO to work across Tivoli® applications, their application servers must be installed in the same domain (use the same domain name). See below for an example.
8. Deselect the Interoperability Mode option.
9. Deselect the Web inbound security attribute propagation option.
10. Click OK, then save your changes.
11. Stop and restart all the DASH server instances. Log out of the WebSphere Administrative Console.

Example
If DASH is installed on server1.ibm.com and Netcool Configuration Manager is installed on server2.ibm.com, then enter a value of .ibm.com.

What to do next
You enable SSO on Netcool Configuration Manager next.
Related tasks
Importing the DASH LTPA keystore to the Netcool Configuration Manager server
For added security the contents of the LTPA token are encrypted and decrypted using a keystore maintained by WebSphere. In order for two instances of WebSphere to share authentication information via LTPA tokens they must both use the same keystore. The IBM admin console makes this a simple process of exporting the keystore on one instance of WebSphere and importing it into another.

Enabling SSO for Netcool Configuration Manager
Both Netcool Configuration Manager and Netcool Configuration Manager WebSphere must be configured to enable SSO.

About this task
This task describes how to enable SSO for Netcool Configuration Manager if it was not enabled during installation.

Procedure
1. Navigate to $NCM_installation_dir/bin/utils.
2. Run the configSSO.sh script, for example:

cd /opt/IBM/tivoli/netcool/ncm/bin/utils
./configSSO.sh enable

What to do next
When SSO is enabled, the interface to Netcool Configuration Manager must accept an LTPA token as a means of authentication. This is achieved by importing the LTPA keystore to the Netcool Configuration Manager server.

Importing the DASH LTPA keystore to the Netcool Configuration Manager server
For added security, the contents of the LTPA token are encrypted and decrypted using a keystore maintained by WebSphere. In order for two instances of WebSphere to share authentication information via LTPA tokens, they must both use the same keystore. The IBM admin console makes this a simple process of exporting the keystore on one instance of WebSphere and importing it into another.

Before you begin
You must have exported the LTPA keystore from the instance of WebSphere running on the Network Manager DASH server and copied it to the Netcool Configuration Manager server in a previous task.

About this task
In this procedure you will import that LTPA keystore to the instance of WebSphere running on the Netcool Configuration Manager server.

Procedure
1. Log on to the WebSphere Administrative Console for the Netcool Configuration Manager Presentation Server using the superuser name and password specified at install time (typically Intelliden). For example: http://NCM_presentation_server:16316/ibm/console
2. Click Security > Global security.
3. Under Authentication mechanisms and expiration, click LTPA.
4. Under Cross-cell single sign-on, enter the password in the Password and Confirm password fields. This password is the one that was used when the LTPA keystore was exported from DASH.
5. Enter the LTPA keystore file name in the Fully qualified key file name field. This is the LTPA keystore that was exported from DASH.
6. Click Import keys.
7. Click Save directly to the master configuration.

What to do next
You should now configure single sign-on attributes for the WebSphere instance running on the Netcool Configuration Manager server.
Related tasks
Exporting the DASH LTPA keystore

For added security the contents of the LTPA token are encrypted and decrypted using a keystore (referred to in the subsequent procedure as the LTPA keystore) maintained by WebSphere. In order for two instances of WebSphere to share authentication information via LTPA tokens they must both use the same LTPA keystore. The IBM Admin Console makes this a simple process of exporting the LTPA keystore on one instance of WebSphere and importing it into another.
Configuring single sign-on attributes for DASH
Configuring SSO is a prerequisite to integrating products that are deployed on multiple servers. All DASH server instances must point to the central user registry.

Configuring single sign-on attributes for Netcool Configuration Manager WebSphere
Configuring SSO is a prerequisite to integrating products that are deployed on multiple servers. All eWAS server instances must point to the central user registry.

About this task
This procedure is performed on the Netcool Configuration Manager eWAS instance running on the Netcool Configuration Manager server.

Procedure
1. Log on to the WebSphere Administrative Console for the Netcool Configuration Manager Presentation Server using the superuser name and password specified at install time (typically Intelliden). For example: http://NCM_presentation_server:16316/ibm/console
2. In the Authentication area, expand Web security, then click Single sign-on.
3. Select the Enabled option if SSO is disabled.
4. Deselect Requires SSL.
5. Leave the domain name blank in the Domain name field.
6. Deselect the Interoperability Mode option.
7. Deselect the Web inbound security attribute propagation option.
8. Click Apply to save your changes.
9. Click Save Directly to the Master Configuration.

What to do next
You create a federated user repository for Netcool Configuration Manager next.

Creating and configuring a federated user repository for Netcool Configuration Manager
The first step for authenticating by using a Tivoli Netcool/OMNIbus ObjectServer is to create a federated user repository for Netcool Configuration Manager.

Before you begin
Important: Before attempting this procedure, complete the following task: “Configuring integration with Network Manager” on page 76.

About this task
A federated user repository is built on Virtual Member Manager (VMM), which provides the ability to map entries from multiple individual user repositories into a single virtual repository. The federated user repository consists of a single named realm, which is a set of independent user repositories. Each user repository may be an entire external user repository.
This task describes how to create and configure a federated user repository for Netcool Configuration Manager.

Chapter 3. Installing Netcool Operations Insight 87 Procedure 1. Launch the WebSphere Administrative Console from http://:<16316>/ibm/console and login using the Netcool Configuration Manager superuser name and password specified during installation. Note: The port number may be different for a non-standard installation. 2. Select Security > Global security. 3. Under the User account repository, select Federated repositories from the Available realm definitions field, and click Configure. 4. Under Repositories in the realm, select Add repositories (LDAP, custom, etc). 5. Under General Properties, select New Repository > Custom Repository 6. Update the ObjectServer VMM properties as described here (or per your custom repository): Repository identifier NetcoolObjectServer Repository adapter class name com.ibm.tivoli.tip.vmm4ncos.ObjectServerAdapter Custom Properties Add the following four properties: Note: Find the exact details from the repository viewable on the Network Manager Gui Administrative Console.

Table 20. Custom Properties

Name (case-sensitive) | Value
username | ObjectServer administrator user name
password | ObjectServer encrypted administrator user password
port1 | ObjectServer port number
host1 | ObjectServer hostname/IP address

7. Click Apply and save your changes directly to the master configuration.
8. Under General properties of Repository Reference, update the Unique distinguished name to o=netcoolObjectServerRepository.
9. Click OK and save your changes directly to the master configuration, then click OK again.
10. The local repository may not contain IDs that are also in Netcool Configuration Manager. To mitigate, perform one of the following steps:
• Remove the local file repository from the federation of repositories.
• Remove all the conflicting users from the local file repository.
11. If prompted, enter the WebSphere Administrator user password in the Password and Confirm Password fields, and click OK.
12. In Global security under the User account repository, select Federated Repositories from the Available realm definitions field, and click Set as current.
13. Click Apply and save your changes directly to the master configuration.
14. Log out of the Administrative Console.
15. Stop the Netcool Configuration Manager server using the ./itncm.sh stop command. Then start the Netcool Configuration Manager server using the ./itncm.sh start command, as in the example below.
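A minimal restart sketch for step 15, assuming the default Netcool Configuration Manager installation directory; the script location is an assumption based on that default:

cd /opt/IBM/tivoli/netcool/ncm/bin    # itncm.sh location assumes the default install directory
./itncm.sh stop
./itncm.sh start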

What to do next
Netcool Configuration Manager now authenticates with the ObjectServer through VMM. The Netcool Configuration Manager superuser is reverted to the user created during the DASH profile installation (smadmin by default).

Installing the Network Manager Insight Pack
This topic explains how to install the Network Manager Insight Pack into the Operations Analytics - Log Analysis product and make the necessary configurations. The Network Manager Insight Pack is required only if you deploy the Networks for Operations Insight feature and want to use the topology search capability. For more information, see “Network Manager Insight Pack” on page 334. Operations Analytics - Log Analysis can be running while you install the Insight Pack.

Before you begin
You already completed some of these prerequisites when you installed the Tivoli Netcool/OMNIbus Insight Pack. See “Installing the Tivoli Netcool/OMNIbus Insight Pack” on page 60 for more details.
• Install the Operations Analytics - Log Analysis product. For upgrades, migrate the data from previous instances of the product.
• Ensure that the Tivoli Netcool/OMNIbus Insight Pack is installed before a data source is created. For more information, see “Netcool/OMNIbus Insight Pack” on page 311.
• Download the Network Manager Insight Pack from IBM Passport Advantage. The Insight Pack image is contained within the Operations Analytics - Log Analysis download; see the information about Event Search integration and Topology Search integration in http://www-01.ibm.com/support/docview.wss?uid=swg24043698. The file name of the Insight Pack is NetworkManagerInsightPack_V1.3.0.0.zip.
• Install Python 2.6 or later with the simplejson library, which is required by the custom apps that are included in the Insight Pack.
• Over large network topologies, the topology search can be performance intensive. It is therefore important to determine which parts of your network you want to use the topology search on. You can define those parts of the network as a single domain. Alternatively, implement the cross-domain discovery function in Network Manager IP Edition to create a single aggregation domain of the domains that you want to search. You can restrict the scope of the topology search to that domain or aggregation domain by setting the ncp.dla.ncim.domain property to the name of the domain. If you still anticipate a detrimental impact on performance, you can also set the ncp.spf.multipath.maxLinks property, which sets a threshold on the number of links that are processed when the paths between two end points are retrieved. If the threshold is breached, only the first identified route between the two end points is retrieved. Make these settings in step 3 of this task. For more information about deploying Network Manager IP Edition to monitor small, medium, and large networks, see https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/install/concept/ovr_deploymentseg.html. For more information about the cross-domain discovery function, see https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/disco/task/dsc_configuringcrossdomaindiscoveries.html.
• Obtain the details of the NCIM database that is used to store the network topology for the Network Manager IP Edition product.

Procedure
1. Copy the NetworkManagerInsightPack_V1.3.0.0.zip installation package to $UNITY_HOME/unity_content.
   Tip: For better housekeeping, create a new $UNITY_HOME/unity_content/NetworkManager directory and copy the installation package there.
2. Use the $UNITY_HOME/utilities/pkg_mgmt.sh command to install the Insight Pack. For example, to install into $UNITY_HOME/unity_content/NetworkManager, run the command as follows:

$UNITY_HOME/utilities/pkg_mgmt.sh -install $UNITY_HOME/unity_content/NetworkManager/NetworkManagerInsightPack_V1.3.0.0.zip

3. In $UNITY_HOME/AppFramework/Apps/NetworkManagerInsightPack_V1.3.0.0/Network_Topology_Search/NM_EndToEndSearch.properties, specify the details of the NCIM database.
   Tip: You can obtain most of the information that is required from the $NCHOME/etc/precision/DbLogins.cfg or DbLogins.DOMAIN.cfg files (where DOMAIN is the name of the domain).
   ncp.dla.ncim.domain
      Limits the scope of the topology search capability to a single domain in your topology. For multiple domains, implement the cross-domain discovery function in Network Manager IP Edition and specify the name of the aggregation domain. For all domains in the topology, comment out this property. Do not leave it blank.
   ncp.spf.multipath.maxLinks
      Sets a limit on the number of links that are processed when the paths between two end points are retrieved. If the number of links exceeds the limit, only the first identified path is returned. For example, suppose you specify ncp.spf.multipath.maxLinks=1000. If 999 links are processed, all paths between the two end points are retrieved. If 1001 links are processed, one path is calculated and then processing stops.
   ncp.dla.datasource.type
      The type of database used to store the Network Manager IP Edition topology. Possible values are db2 or oracle.
   ncp.dla.datasource.driver
      The database driver. For Db2, type com.ibm.db2.jcc.DB2Driver. For Oracle, type oracle.jdbc.driver.OracleDriver.
   ncp.dla.datasource.url
      The database URL. For Db2, the URL is as follows:

jdbc:db2://host:port/name

For Oracle, the URL is as follows:

jdbc:oracle:thin:@host:port:name

In each case, host is the database host name, port is the port number, and name is the database name, for example, NCIM.
ncp.dla.datasource.schema
   Type the NCIM database schema name. The default is ncim.
ncp.dla.datasource.ncpgui.schema
   Type the NCPGUI database schema name. The default is ncpgui.
ncp.dla.datasource.username
   Type the database user name.
ncp.dla.datasource.password
   Type the database password.
ncp.dla.datasource.encrypted
   If the password is encrypted, type true. If not, type false.
ncp.dla.datasource.keyfile
   Type the name of and path to the cryptographic key file, for example $UNITY_HOME/wlp/usr/servers/Unity/keystore/unity.ks.
ncp.dla.datasource.loginTimeout
   Change the number of seconds until the login times out, if required.
Optionally change the logging information, which is specified by the java.util.logging.* properties.
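For illustration, a completed NM_EndToEndSearch.properties might look like the following sketch for a Db2-backed NCIM database in a single domain; the domain name, host, port, and credentials shown are hypothetical examples, not defaults:

# NM_EndToEndSearch.properties (excerpt); all values below are hypothetical examples
ncp.dla.ncim.domain=NCOMS
ncp.spf.multipath.maxLinks=1000
ncp.dla.datasource.type=db2
ncp.dla.datasource.driver=com.ibm.db2.jcc.DB2Driver
ncp.dla.datasource.url=jdbc:db2://ncim.example.com:50000/NCIM
ncp.dla.datasource.schema=ncim
ncp.dla.datasource.ncpgui.schema=ncpgui
ncp.dla.datasource.username=ncim
ncp.dla.datasource.password=changeMe
ncp.dla.datasource.encrypted=false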

Results
• The NetworkManagerInsightPack_V1.3.0.0 Insight Pack is installed into the directory that you selected in step 2.
• The Rule Set, Source Type, and Collection are in place. You can view these resources in the Administrative Settings page of Operations Analytics - Log Analysis.

What to do next
• Use the pkg_mgmt command to verify the installation of the Insight Pack. See Verifying the Network Manager Insight Pack.
• If you are using an Oracle database, make the extra configurations that are required to support Oracle. See “Configuring topology search apps for use with Oracle databases” on page 339.
• Configure the products to support the topology search capability. See “Configuring Topology Search” on page 332.
Related concepts
Data Source creation in Operations Analytics - Log Analysis V1.3.5
Data Source creation in Operations Analytics - Log Analysis V1.3.3
Related tasks
Installing the Tivoli Netcool/OMNIbus Insight Pack
This topic explains how to install the Netcool/OMNIbus Insight Pack into the Operations Analytics - Log Analysis product. Operations Analytics - Log Analysis can be running while you install the Insight Pack. This Insight Pack ingests event data into Operations Analytics - Log Analysis and installs custom apps.
Configuring topology search
Before you can use the topology search capability, configure the Tivoli Netcool/OMNIbus core and Web GUI components, the Gateway for Message Bus, and Network Manager IP Edition.
Related information
Gateway for Message Bus documentation

Installing Performance Management
To add the Performance Management solution extension, install Network Performance Insight and then integrate it with Netcool Operations Insight.
Related tasks
Installing the Device Dashboard
Follow these instructions to install the Device Dashboard.

Installing Network Performance Insight
Install Network Performance Insight by performing the steps in the Installation section of the Network Performance Insight documentation.

About this task
In particular, ensure that you perform the following tasks as part of the installation of Network Performance Insight:
• Enable Network Performance Insight integration with Jazz for Service Management by running the npiDashIntegration script.
• Add the relevant signer certificate to your browser to enable single sign-on.
• Configure the Netcool/OMNIbus Standard Input probe to enable performance anomaly events to be displayed in the Netcool/OMNIbus Event Viewer, and verify that the probe starts automatically once installation of Network Performance Insight is complete and the Network Performance Insight Event Service has started.
For more information, see the Network Performance Insight documentation at https://www.ibm.com/support/knowledgecenter/SSCVHB_1.3.1/npi_kc_welcome.html.
Related reference
Ports used by products and components
Use this information to understand which ports are used by the different products and components that make up the Netcool Operations Insight solution.

Enabling the integration with Network Performance Insight
Enable the integration by creating a kafka.properties file and populating it with the relevant properties.

Before you begin
The Network Performance Insight Kafka server must be available and running before you can enable the integration.

Procedure
1. Copy the kafka.properties file from its default location $NCHOME/precision/storm/conf/default/ to the following location:

$NCHOME/precision/storm/conf/

2. Edit the kafka.properties file as follows:
   a) Set the Kafka producer properties under the kafka.producer property according to the information in the Kafka documentation: http://kafka.apache.org/documentation.html#producerconfigs
   b) Set the Kafka consumer properties under the kafka.consumer property according to the information in the Kafka documentation: http://kafka.apache.org/documentation.html#newconsumerconfigs
   Note: The only mandatory properties are the following:
   • kafka.consumer.bootstrap.servers
   • kafka.producer.bootstrap.servers
3. (Optional) If you anticipate the need to enable and disable the integration often, add the kafka.enabled property to one of the following properties files, and set it to true:
   • $NCHOME/precision/storm/conf/NMStormTopology.properties
   • $NCHOME/precision/storm/conf/kafka.properties
   If the property is not present in either file, the integration behaves as if kafka.enabled=true.
4. Restart Apache Storm by running the following commands:

itnm_stop storm

itnm_start storm
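As a point of reference, a minimal kafka.properties that satisfies the mandatory properties might look like the following sketch; the broker host names and port are hypothetical and must match your Network Performance Insight Kafka deployment:

# kafka.properties (excerpt); broker addresses are hypothetical examples
kafka.producer.bootstrap.servers=npi-kafka1.example.com:9092,npi-kafka2.example.com:9092
kafka.consumer.bootstrap.servers=npi-kafka1.example.com:9092,npi-kafka2.example.com:9092
# Optional toggle described in step 3
kafka.enabled=true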

Results
To test the output of the integration, use the ncp_storm_validate.sh script.
Related information
Kafka producer properties
Kafka consumer properties

Network Manager V4.2 documentation: Starting and stopping ncp_storm. Click here to access information on starting and stopping Network Manager processes, including the ncp_storm process, in the Network Manager documentation.
Network Manager V4.2 documentation: ncp_storm_validate.sh. Click here to access information on the ncp_storm_validate.sh script in the Network Manager documentation.

Configuring Network Performance Insight
Integrate Network Performance Insight with Netcool Operations Insight by performing the steps in the Configuring section of the Network Performance Insight documentation.

About this task
For more information, see the Network Performance Insight documentation at https://www.ibm.com/support/knowledgecenter/SSCVHB_1.3.1/npi_kc_welcome.html.

Installing the Device Dashboard
Install the Device Dashboard to view event and performance data for a selected device and its interfaces on a single dashboard.
Related concepts
Device Dashboard
Use the Device Dashboard to troubleshoot network issues by navigating the network topology and seeing performance anomalies and trends on any device, link, or interface.

About the Device Dashboard
Read this information before installing the Device Dashboard.
The content of the Device Dashboard varies depending on which Netcool Operations Insight solution extensions you have installed. As a minimum, you must have the base Netcool Operations Insight solution installed together with the Networks for Operations Insight solution extension in order to install the Device Dashboard. If you have this combination installed, the Device Dashboard displays the following content:
• Network Hop View portlet, enabling network navigation to a selected device.
• Event Viewer portlet, displaying events for a selected device.
• Top Performers portlet, displaying the top Network Manager polling data for a selected device or interface.
If you have the base Netcool Operations Insight solution installed together with both the Networks for Operations Insight and Performance Management for Operations Insight solution extensions, then, in addition to the Network Hop View and Event Viewer portlets, the Device Dashboard also displays the following content:
• Performance Insights portlet, displaying Network Performance Insight performance anomaly data for a selected device and its interfaces.
• Performance Timeline portlet, displaying Network Performance Insight performance timeline data for a selected device and its interfaces.
• Capability to launch from a selected interface in the Performance Insights portlet to the Network Performance Insight Traffic Details dashboard to see flow data for that interface.
Related tasks
Upgrading the Device Dashboard

Installing the Device Dashboard
Follow these instructions to install the Device Dashboard.

Before you begin
Before you install the Device Dashboard, the following prerequisites must be met:

• If your Netcool Operations Insight system includes the Networks for Operations Insight solution extension only, then Network Manager and the Network Health Dashboard must be installed and configured.
• If your Netcool Operations Insight system includes the Networks for Operations Insight and the Performance Management for Operations Insight solution extensions, then the following prerequisites must also be met:
   – Network Performance Insight must be installed and configured.
   – Network Manager must be integrated with Network Performance Insight.
   – All post-installation Network Performance Insight configuration tasks must be complete.
Note: For more information about the Networks for Operations Insight and the Performance Management for Operations Insight solution extensions, see the Solution overview.

Procedure
On the host where you installed the Network Manager GUI components, start Installation Manager and install the following package: Netcool Operations Insight Widgets version -> Netcool Operations Insight Device Dashboard.
Where version is the VRMF version number for the current version of the Device Dashboard; for example, 1.1.0.2. For information on all current version numbers, see the following web page: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Netcool%20OMNIbus/page/Release%20details
This installs the Device Dashboard, which you can use to troubleshoot network issues by navigating the network topology and viewing performance anomalies and trends on any device or interface.
Note: If you installed Network Performance Insight and integrated it as part of your Netcool Operations Insight solution, then at the start of this process Installation Manager asks for Ambari credentials. Ambari is configured during the Network Performance Insight installation process, and you would have configured Ambari credentials at that time. Default credentials are as follows:
• User ID: admin
• Password: admin
Related tasks
Installing Performance Management
To add the Performance Management solution extension, install Network Performance Insight and then integrate it with Netcool Operations Insight.

Configuring the Device Dashboard
Following installation of the Device Dashboard, you must perform these post-installation tasks.

Procedure
1. The Device Dashboard installation process automatically creates the noi_npi and noi_npi_admin roles, which allow users to work with the dashboard. Assign these Device Dashboard roles to relevant users. Log in as the Dashboard Application Services Hub administrator and assign the roles as follows:
   a) Go to Console settings > Roles.
   b) In Users and Groups, assign the roles noi_npi and noi_npi_admin to the npiadmin user, and assign the noi_npi role to the npiuser user. The noi_npi role provides access to view the Device Dashboard, while the noi_npi_admin role provides edit access to the Device Dashboard.
   Note: You can assign the roles to other individual users or add the role to a group to control access.

   Warning: If you do not assign the roles and the user selects Show Device Dashboard in the right-click tools, the GUI will hang and the user will receive an encoding error message when logging out, requiring a restart of the browser.
   c) Click Save.
If Network Performance Insight is integrated as part of your Netcool Operations Insight solution, then you must also complete the following configuration tasks.
2. On the same host, save the Network Performance Insight profile by performing the following steps:
   a) Go to Console Settings > Console Integrations.
   b) Select NPI.
   c) Review the information, and click Save.

The Network Performance Insight icon appears on the left in the Navigation bar.
3. Configure access to traffic flow data by performing the following steps:
   a) Log into the Network Manager GUI server and navigate to the following file:

$NMGUI_HOME/profile/etc/tnm/tnm.properties

Where $NMGUI_HOME is the location where the Network Manager GUI components are installed. By default, this location is /opt/IBM/netcool/gui/precision_gui.
b) Open the tnm.properties file for editing.
c) Add the following property:

tnm.npi.host.name=https://NPI_Server_Name:9443

Where NPI_Server_Name is the host name of the Network Performance Insight server.
d) Save the tnm.properties file.
4. Specify the Network Performance Insight version by performing the following steps:
   a) Log into the Network Manager GUI server and navigate to the following file:

$NMGUI_HOME/profile/etc/tnm/npi.properties

Where $NMGUI_HOME is the location where the Network Manager GUI components are installed. By default, this location is /opt/IBM/netcool/gui/precision_gui.
b) Open the npi.properties file for editing.
c) Add the following property:

npi.server.version=1.3.1

d) Save the npi.properties file.
e) Restart the Dashboard Application Services Hub server to enable these properties to take effect.
Related information
Network Manager V4.2 documentation: User roles. Click here to access information on user roles in the Network Manager documentation.
Tivoli Netcool/OMNIbus V8.1 documentation: Restarting the DASH server. Click here to access information on how to restart the Dashboard Application Services Hub server.

Installing on-premises Agile Service Manager
Install the latest version of Agile Service Manager.

Procedure
1. To install the Agile Service Manager Core services and Observers, proceed as follows:

a) On the server where Agile Service Manager Core services are installed (in our example, server 6), extract the Agile Service Manager Base and Observer eAssembly archives.
b) Follow the instructions in the Agile Service Manager documentation to complete the installation.
2. To install the Agile Service Manager UI, proceed as follows:
   a) On the server where Dashboard Application Services Hub is installed (in our example, server 3), extract the Agile Service Manager Base eAssembly archive.
   b) Start Installation Manager and configure it to point to the repository.config file for Agile Service Manager.
   c) See the Agile Service Manager documentation for more information on how to perform the installation and the post-installation configuration tasks, such as configuring the Gateway for Message Bus to support the Agile Service Manager Event Observer.
Related information
Netcool Agile Service Manager Knowledge Center

Post-installation tasks
Perform the following post-installation tasks for an on-premises deployment of Netcool Operations Insight.

Integrating with components on Red Hat OpenShift
You can integrate your on-premises IBM Netcool Operations Insight installation with deployments of Agile Service Manager and event management on Red Hat OpenShift.

Integrating with Agile Service Manager on Red Hat OpenShift
Learn how to integrate your on-premises IBM Netcool Operations Insight installation with a deployment of Agile Service Manager on Red Hat OpenShift.

Before you begin
You must have on-premises IBM Netcool Operations Insight successfully installed.

Procedure
1. Install Agile Service Manager on Red Hat OpenShift. For more information, see the Agile Service Manager Knowledge Center: https://www.ibm.com/support/knowledgecenter/SS9LQB_1.1.8/Installing/t_asm_ocp_installing.html
2. Install the Agile Service Manager UI on your on-premises DASH server. For more information, see the Agile Service Manager Knowledge Center: https://www.ibm.com/support/knowledgecenter/SS9LQB_1.1.8/Installing/t_asm_installingui_viainstaller.html
3. Add Agile Service Manager roles in your on-premises DASH server. For more information, see the Agile Service Manager Knowledge Center: https://www.ibm.com/support/knowledgecenter/SS9LQB_1.1.8/Installing/t_asm_configuring.html
4. Configure the on-premises ASM UI with ASM core on OCP by following the instructions at https://www.ibm.com/support/knowledgecenter/SS9LQB_1.1.8/Installing/c_asm_hybrid_installing.html in the Agile Service Manager Knowledge Center.

Integrating with event management on Red Hat OpenShift
Learn how to integrate your on-premises IBM Netcool Operations Insight installation with a deployment of event management on Red Hat OpenShift.

Before you begin
On-premises IBM Netcool Operations Insight must be successfully installed.

About this task
Follow this procedure to install event management on Red Hat OpenShift and enable it to forward events to your on-premises Netcool Operations Insight installation's ObjectServer: https://www.ibm.com/support/knowledgecenter/SSTPTP_1.6.0/com.ibm.netcool_ops.doc/soc/integration/concept/cem_overview.html

Configuring Single Sign-On
Single Sign-On (SSO) can be configured to support the launch of tools between the products and components of Netcool Operations Insight. Different SSO handshakes are supported; which handshake to configure for which capability is described here. Each handshake must be configured separately.

Procedure
Set up the SSO handshake as described in the following table. The table lists which products and components are connected by SSO, which capabilities require which SSO handshake, and additional useful information. See the related tasks at the end of the page for links to more information.

Table 21. SSO handshakes for Netcool Operations Insight

SSO handshake can be configured between: Operations Analytics - Log Analysis and Dashboard Application Services Hub
Handshake is configured to support this capability: Event search
Additional notes: Supports the launch of right-click tools from the event lists3 of the Netcool/OMNIbus Web GUI to the custom apps of the Tivoli Netcool/OMNIbus Insight Pack.

SSO handshake can be configured between: Operations Analytics - Log Analysis and Dashboard Application Services Hub
Handshake is configured to support this capability: Topology search
Additional notes: Supports the launch of right-click tools from the Web GUI event lists to the custom apps of the Network Manager Insight Pack. Also supports the launch of right-click tools from the Network Views in the Network Manager product to the custom apps in the Network Manager Insight Pack.

SSO handshake can be configured between: Netcool Configuration Manager and Dashboard Application Services Hub
Handshake is configured to support this capability: Networks for Operations Insight
Additional notes: Supports the launch of right-click tools from the Network Views to the Netcool Configuration Manager GUIs.

Related tasks
Configuring single sign-on for the event search capability
Configure single sign-on (SSO) between Web GUI and Operations Analytics - Log Analysis so that users can switch between the two products without having to log in each time.
Configuring single sign-on for the topology search capability
Configuring SSO between Operations Analytics - Log Analysis V1.3.5 and Dashboard Application Services Hub
Configuring SSO between Operations Analytics - Log Analysis V1.3.3 and Dashboard Application Services Hub

3 That is, the Event Viewer and Active Event List.

Configuring Single Sign-On for Netcool Configuration Manager
Configuring SSO is a prerequisite to integrating products that are deployed on multiple servers. All DASH server instances must point to the central user registry.
Related information
Configuring Jazz for Service Management for SSO

Uninstalling on premises
Use this information to learn how to uninstall on-premises Netcool Operations Insight.
To uninstall the components of Netcool Operations Insight, refer to the uninstallation guides for each of its components. Be sure to follow any dependency matrices and prerequisites that are given; the order of uninstallation is significant because some components depend on others.

Component: Netcool/OMNIbus core
Reference: https://www.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/install/task/omn_ins_im_removing_omn.html

Component: Netcool/OMNIbus Web GUI
Reference: https://www.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/webtop/wip/task/web_ins_removing.html

Component: Netcool/Impact
Reference: https://www.ibm.com/support/knowledgecenter/SSSHYH_7.1.0.19/com.ibm.netcoolimpact.doc/admin/imag_install_uninstalling_t.html

Component: Db2
Reference: https://www.ibm.com/support/knowledgecenter/SSEPGG_11.5.0/com.ibm.db2.luw.qb.server.doc/doc/c0059726.html

Component: Operations Analytics - Log Analysis
Reference: https://www.ibm.com/support/knowledgecenter/en/SSPFMY_1.3.6/com.ibm.scala.doc/install/iwa_uninstall_oview_c.html

Component: IBM Tivoli Network Manager
Reference: https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/install/task/ins_uninstallingandmaintainingtheproduct.html

Component: ITNCM
Reference: https://www.ibm.com/support/knowledgecenter/SS7UH9_6.4.2/ncm/wip/install/task/ncm_ins_uninstallingncm.html

Component: Network Performance Insight
Reference: https://www.ibm.com/support/knowledgecenter/SSCVHB_1.3.1/install/tnpi_uninstalling.html

Component: ASM UI
Reference: https://www.ibm.com/support/knowledgecenter/SS9LQB_1.1.8/Installing/t_asm_uninstallinguiim.html

Component: ASM Core services
Reference: https://www.ibm.com/support/knowledgecenter/SS9LQB_1.1.8/Installing/t_asm_uninstallingcore.html

Component: JazzSM Fixpacks
Reference: https://www.ibm.com/support/knowledgecenter/SSEKCU_1.1.3.0/com.ibm.psc.doc/install/psc_t_rollback_fp3_oview.html

Component: JazzSM and associated software
Reference: https://www.ibm.com/support/knowledgecenter/SSEKCU_1.1.3.0/com.ibm.psc.doc/install/psc_c_install_uninstall_oview.html

Component: Installation Manager
Reference: https://www.ibm.com/support/knowledgecenter/en/SSDV2W_1.8.5/com.ibm.silentinstall12.doc/topics/r_uninstall_cmd.html

Uninstalling Event Analytics
You can uninstall Event Analytics with the IBM Installation Manager GUI or console, or do a silent uninstall.

For more information about installing and using IBM Installation Manager, see the following IBM information center: https://www.ibm.com/support/knowledgecenter/SSDV2W_1.8.5/com.ibm.cic.agent.ui.doc/helpindex_imic.html

Uninstalling Event Analytics
Use IBM Installation Manager to remove Event Analytics.

Before you begin
Take the following actions:
• Stop all Event Analytics processes.
• Back up any data or configuration files that you want to retain.
• To do a silent removal, create or record an Installation Manager response file. Use the -record response_file option. To create a response file without installing the product, use the -skipInstall option. For example:
  1. Create or record a skipInstall:
     IBMIM.exe -record C:\response_files\install_1.xml -skipInstall C:\Temp\skipInstall
  2. To create an uninstall response file, using the created skipInstall:
     IBMIM.exe -record C:\response_files\uninstall_1.xml -skipInstall C:\Temp\skipInstall

About this task
Note: To uninstall Tivoli Netcool/OMNIbus Web GUI 8.1.0.19, you must first uninstall the Netcool Operations Insight Extensions for IBM Tivoli Netcool/OMNIbus Web GUI 8.1.0.19, including the Event Analytics feature.

Procedure
GUI removal
1. To remove Event Analytics with the Installation Manager GUI:
   a) Change to the /eclipse subdirectory of the Installation Manager installation directory.
   b) Use the following command to start the Installation Manager wizard: ./IBMIM
   c) In the main Installation Manager window, click Uninstall.
   d) Select the offerings that you want to remove and follow the Installation Manager wizard instructions to complete the removal.
Console removal
2. To remove Event Analytics with the Installation Manager console:
   a) Change to the /eclipse/tools subdirectory of the Installation Manager installation directory.
   b) Use the following command to start the Installation Manager: ./imcl -c
   c) From the Main Menu, select Uninstall.

   d) Select the offerings that you want to remove and follow the Installation Manager instructions to complete the removal.
Silent removal
3. To silently remove Event Analytics:
   a) Change to the /eclipse/tools subdirectory of the Installation Manager installation directory.
   b) Use the following command to start the Installation Manager:
      ./imcl -input response_file -silent -log /tmp/install_log.xml -acceptLicense
      Where response_file is the directory path to the response file that defines the removal configuration.

Results
Installation Manager removes the files and directories that it installed.

What to do next
Files that Installation Manager did not install, and configuration files that were changed, are left in place. Review these files and remove them or back them up as appropriate.

Installing on Red Hat OpenShift only
Follow these instructions to prepare for and install IBM Netcool Operations Insight on OpenShift. Click here to download the Netcool Operations Insight on OpenShift Installation Guide.
When you install Netcool Operations Insight on OpenShift, all of its components are automatically deployed as pods running within the cluster, and are orchestrated by Kubernetes.

Figure 8. Architecture diagram. The diagram shows the node types in the cluster: worker or compute nodes, which run all applications (the IBM Netcool Operations Insight nodes and, optionally, the IBM Cloud Platform Common Services nodes); master nodes, which form the control plane; and a bootstrap node, which is needed only during Red Hat OpenShift deployment.

For more information about the architecture of your deployment, see “Products and components on Red Hat OpenShift” on page 4.

Preparing
Before you can install Netcool Operations Insight on Red Hat OpenShift, you must set up your cluster.

Sizing for a Netcool Operations Insight on Red Hat OpenShift deployment
Learn about the sizing requirements for a full Netcool Operations Insight on OpenShift deployment.

Hardware sizing for a full deployment on Red Hat OpenShift

Table 22. Minimum recommendations for a multi-node cluster full deployment

Event Management sizing information (event rate throughput):
• Steady state events per second: Demo/PoC/Trial 20; Production 50
• Burst rate events per second: Demo/PoC/Trial 100; Production 500

Topology Management sizing information:
• System size (approx. resources): Demo/PoC/Trial 200,000; Production 1,000,000
• Steady state events per second: Demo/PoC/Trial 10; Production 50
• Burst rate events per second: Demo/PoC/Trial 10; Production 200

Environment options:
• Container size: Demo/PoC/Trial Trial; Production Production
• High availability: Demo/PoC/Trial No; Production Yes

Table 23. Total hardware requirements for a Netcool Operations Insight system (including Event Management and Topology Management) deployed on a Red Hat OpenShift cluster, including both the Netcool Operations Insight and Red Hat OpenShift related hardware needs. This table is useful for customers who need to create a new Red Hat OpenShift cluster to deploy the full Red Hat OpenShift and Netcool Operations Insight stack.

Total requirements (Netcool Operations Insight including Red Hat OpenShift and infrastructure nodes):
• x86 CPUs (Recommended): Demo/PoC/Trial 62; Production 128
• Memory (GB) (Recommended): Demo/PoC/Trial 136; Production 344
• Disk (GB) (Recommended): Demo/PoC/Trial 1500; Production 4000
• x86 CPUs (Min): Demo/PoC/Trial 54; Production 126
• Memory (GB) (Min): Demo/PoC/Trial 120; Production 336
• Disk (GB) (Min): Demo/PoC/Trial 1400; Production 3500
• Persistent storage requirements (Gi): Demo/PoC/Trial 851; Production 2796
• Total disk IOPS requirements: Demo/PoC/Trial 1760; Production 8550

Table 24. Hardware requirements attributed to the Netcool Operations Insight footprint deployed on Red Hat OpenShift. This table is useful for customers who already have a Red Hat OpenShift cluster installed but need to add worker/compute nodes to it to accommodate Netcool Operations Insight.

Total hardware requirements (Netcool Operations Insight services only):
• x86 CPUs (Recommended): Demo/PoC/Trial 56; Production 112
• Memory (GB) (Recommended): Demo/PoC/Trial 112; Production 280
• Disk (GB) (Recommended): Demo/PoC/Trial 700; Production 2100
• x86 CPUs (Min): Demo/PoC/Trial 48; Production 112
• Memory (GB) (Min): Demo/PoC/Trial 96; Production 280
• Disk (GB) (Min): Demo/PoC/Trial 600; Production 2100
• Persistent storage requirements (Gi): Demo/PoC/Trial 851; Production 2796
• Total disk IOPS requirements: Demo/PoC/Trial 1760; Production 8550

Table 25. Recommended resource allocation for the Red Hat OpenShift infrastructure, master, and worker nodes, along with the recommended configuration for the disk volumes associated with each persisted storage resource.

Hardware allocation and configuration

OpenShift infrastructure node:
• Minimum nodes count: Demo/PoC/Trial 1; Production 1
• Recommended nodes count: Demo/PoC/Trial 1; Production 2
• x86 CPUs: Demo/PoC/Trial 2; Production 2
• Memory (GB): Demo/PoC/Trial 8; Production 8
• Disk (GB): Demo/PoC/Trial 500; Production 500

OpenShift control plane (master) node(s):
• Minimum nodes count: Demo/PoC/Trial 1; Production 3
• Recommended nodes count: Demo/PoC/Trial 1; Production 3
• x86 CPUs: Demo/PoC/Trial 4; Production 4
• Memory (GB): Demo/PoC/Trial 16; Production 16
• Disk (GB): Demo/PoC/Trial 300; Production 300

Netcool Operations Insight components - suggested configuration

OpenShift compute (worker) nodes:
• Minimum nodes count: Demo/PoC/Trial 6; Production 7
• Recommended nodes count: Demo/PoC/Trial 7; Production 7
• x86 CPUs: Demo/PoC/Trial 8; Production 16
• Memory (GB): Demo/PoC/Trial 16; Production 40
• Disk (GB): Demo/PoC/Trial 100; Production 300

Persistent storage minimum requirements (Gi), Demo/PoC/Trial and Production:
• NCI: 5, 10
• LDAP: 1, 1
• DB2: 5, 5
• NCO: 10, 20
• Cassandra: 200, 1500
• Kafka: 150, 300
• Zookeeper: 15, 30
• Elasticsearch: 450, 900
• File-observer: 5, 10
• CouchDB: 5, 15
• ImpactGUI: 5, 5

Persistent storage minimum IOPS requirements, Demo/PoC/Trial and Production:
• NCI: 100, 200
• LDAP: 50, 100
• DB2: 50, 100
• NCO: 100, 200
• Cassandra: 600, 2400
• Kafka: 300, 2400
• Zookeeper: 50, 150
• Elasticsearch: 300, 2400
• File-observer: 50, 100
• CouchDB: 60, 300
• ImpactGUI: 100, 200

Storage
Create storage prior to your installation.
Note: If you want to deploy IBM Netcool Operations Insight on Red Hat OpenShift on a cloud platform, such as Red Hat OpenShift Kubernetes Service (ROKS), assess your storage requirements. Netcool Operations Insight on container platforms supports three storage options: local storage, Rook Ceph storage, and vSphere storage. Support for Rook Ceph storage was added in V1.6.0.2 for Netcool Operations Insight on OpenShift.

Configure storage Security Context Constraint (SCC)
Before configuring storage, determine and declare the storage SCC for a chart running in a non-root environment; this applies across all of the supported storage solutions. For more information about how to secure your storage environment, see the OpenShift documentation: https://docs.openshift.com/container-platform/4.4/authentication/managing-security-context-constraints.html
The following table shows the information about persistent volumes, along with the expected size and access mode, for a full deployment. For more information about storage access modes, see the Cloud Pak documentation: https://playbook.cloudpaklab.ibm.com/cm/storage-accessmodes

Name | Replicas (Trial) | Replicas (Production) | Trial size per replica | Recommended production size per replica | Access mode | User | fsGroup
cassandra | 1 | 3 | 50Gi | 150Gi | ReadWriteOnce | 1001 | 2001
cassandra-bak | 1 | 3 | 50Gi | 100Gi | ReadWriteOnce | 1001 | 2001
cassandra-topology | 1 | 3 | 50Gi | 150Gi | ReadWriteOnce | 1001 | 2001
cassandra-topology-bak | 1 | 3 | 50Gi | 100Gi | ReadWriteOnce | 1001 | 2001
kafka | 3 | 6 | 50Gi | 50Gi | ReadWriteOnce | 1001 | 2001
zookeeper | 1 | 3 | 5Gi | 10Gi | ReadWriteOnce | 1001 | 2001
couchdb | 1 | 3 | 5Gi | 5Gi | ReadWriteOnce | 1001 | 2001
db2 | 1 | 1 | 5Gi | 5Gi | ReadWriteOnce | 1001 | 2001
impact | 1 | 1 | 5Gi | 5Gi | ReadWriteOnce | 1001 | 2001
impactgui | 1 | 1 | 5Gi | 5Gi | ReadWriteOnce | 1001 | 2001
ncobackup | 1 | 1 | 5Gi | 5Gi | ReadWriteOnce | 1001 | 2001
ncoprimary | 1 | 1 | 5Gi | 5Gi | ReadWriteOnce | 1001 | 2001
openldap | 1 | 1 | 1Gi | 1Gi | ReadWriteOnce | 1001 | 2001
elasticsearch | 1 | 3 | 75Gi | 75Gi | ReadWriteOnce | 1001 | 2001
elasticsearch-topology | 1 | 3 | 75Gi | 75Gi | ReadWriteOnce | 1000 | 1000
fileobserver | 1 | 1 | 5Gi | 5Gi | ReadWriteOnce | 1001 | 2001

Configuring Rook Ceph storage
More information can be found on the Rook Ceph container storage website: https://rook.io/
The following list is a high-level summary of the tasks to be completed:
• Attach a disk of an appropriate size to each compute node on your virtualization console.
• Clone the Rook Storage Orchestrator for Kubernetes Git repository and prepare the relevant .yaml files for Rook storage operator and cluster deployment.
• Create the Rook Ceph Storage Cluster.
• Create the RADOS Block Device (RBD) storage class.
Optional tasks are:
• Access the Rook Ceph Dashboard.
• Use the Rook Ceph toolbox for cluster diagnostics and health checks.
• Deploy a test application to use the RBD Storage Class.

For the step-by-step Rook Ceph storage configuration procedure, click this link from the IT Operations Management Developer Center: https://../..//docs/configuring-red-hat-ceph-storage/.

Configuring persistent volumes with local storage
You can use local storage for trial, demo, or development systems. In addition, it is a supported production configuration for topology management.
For trial or development systems, you can download the createStorageAllNodes.sh script from the IT Operations Management Developer Center: http://ibm.biz/local_storage_script. The script facilitates the creation of persistent volumes (PVs) using local storage. The PVs are mapped volumes, which are mapped to directories off the root file system on the parent node. The script also generates example SSH scripts that create the directories on the local file system of the node. The SSH scripts create directories on the local hard disk associated with the virtual machine and are only suitable for proof of concept or development work.
Local storage is not recommended for production environments, as the loss of a worker node may result in the loss of locally stored data. You can overcome some of these limitations with a file system that meets the required performance, scale, and redundancy requirements. For production environments that use local storage, follow the Red Hat OpenShift recommended method described at:
• https://docs.openshift.com/container-platform/4.4/storage/persistent_storage/persistent-storage-local.html
Note: If local storage is used, the noi-cassandra-* and noi-cassandra-bak-* PVs must be on the same local node. Cassandra pods fail to bind to their PVCs if this requirement is not met.
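Purely as an illustration of the kind of object involved, a local-storage PersistentVolume for one Cassandra replica might be sketched as follows; the PV name, directory path, storage class name, and node name are hypothetical and must match your own cluster:

# Illustrative sketch only; hypothetical names throughout
apiVersion: v1
kind: PersistentVolume
metadata:
  name: noi-cassandra-pv-0            # hypothetical PV name
spec:
  capacity:
    storage: 150Gi                    # production size per replica from the table above
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage     # hypothetical storage class name
  local:
    path: /mnt/local-storage/cassandra  # directory created on the node
  nodeAffinity:                       # pins the PV to the node that holds the directory
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-node-1             # hypothetical node name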

Configuring vSphere storage
If you want to use vSphere storage, consult the Red Hat OpenShift documentation: https://docs.openshift.com/container-platform/4.2/storage/persistent-storage/persistent-storage-vsphere.html

Preparing your cluster
Prepare your cluster for deployment.

Pod Deployment and High Availability (HA)
The deployment of pods across worker nodes is managed by Red Hat OpenShift. Pods for a service are deployed on nodes that meet the service's specification and affinity rules. Kubernetes automatically restarts pods if they fail.

Cluster Preparation
Follow the steps in the table to prepare your cluster.

Table 26. Preparing your cluster

1 Provision the required virtual machines. The hardware architecture on which Netcool Operations Insight is installed must be AMD64. Kubernetes can have a mixture of worker nodes with different architectures, like AMD64, s390x (Linux on System z®), and ARM8.


2. Download and install Red Hat OpenShift. For Red Hat OpenShift documentation, see https://access.redhat.com/documentation/en-us/openshift_container_platform/4.3/. For Red Hat OpenShift videos, see https://www.youtube.com/user/rhopenshift/videos

3. Optional: Install IBM Cloud Platform Common Services. Installing kubectl is optional; the Red Hat OpenShift oc command can be used in place of kubectl. For more information, see:
   • https://www.ibm.com/support/knowledgecenter/en/SSHKN6/kc_welcome_cs.html
   • https://docs.openshift.com/container-platform/4.4/cli_reference/openshift_cli/usage-oc-kubectl.html
   • Kubernetes CLI commands: Kubernetes documentation: Overview

4 Ensure that you have access to an administrator account on the target Red Hat OpenShift cluster. Netcool Operations Insight must be installed by a user with administrative access on the cluster.

5 Create a custom namespace to deploy into.

oc create namespace namespace

where namespace is the name of the custom namespace that you want to create. Optional: If you want multiple independent installations of Netcool Operations Insight within the cluster, then create multiple namespaces within your cluster. Run each installation in a separate namespace. Note: Additional disk space and worker nodes are required to support multiple installations.
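For example, to create a namespace named netcool (the name itself is an arbitrary example) for the deployment:

# Create the target namespace for the Netcool Operations Insight installation
oc create namespace netcool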

108 IBM Netcool Operations Insight: Integration Guide Table 26. Preparing your cluster (continued) Item Action More information 6 Ensure you have access to the For more information, see Netcool Operations Insight for “V1.6.1 Product and component Red Hat OpenShift installation version matrix” on page 9 package. 7 If you load the PPA package, skip this step. Otherwise, obtain the entitlement key that is assigned to your ID. 1. Log in to https:// myibm.ibm.com/products- services/containerlibrary with the IBMid and password that are associated with the entitled software. 2. Select Copy key to copy the entitlement key to the clipboard, in the Entitlement keys section. 3. Log in to the entitled registry by running the command:

docker login cp.icr.io --username username --password entitlement_key

Where username and entitlement_key are the credentials for the entitled registry; entitlement_key is the key that you copied in the previous step.

8. If you load the PPA package, skip this step. Otherwise, you can create a Docker registry secret to enable your deployment to pull the Netcool Operations Insight archive from your local Docker registry into your cluster's namespace:

oc create secret docker-registry noi-registry-secret --docker-username=docker_user --docker-password=docker_pass --docker-server=docker-reg.reg_namespace.svc:5000/namespace

Where:
• noi-registry-secret is the name of the secret that you are creating. The suggested value is noi-registry-secret.
• docker_user is the Docker user, for example kubeadmin.
• docker_pass is the Docker password.
• docker-reg is your Docker registry.
• reg_namespace is the namespace that your Docker registry is in.
• namespace is the name of the namespace that you want to deploy to.
Warning: Creating a secret manually this way results in a temporary password that will expire, which can lead to operational issues. For more information, see “Pods cannot restart if secret expires” on page 663.
Make a note of the name of the secret that you specified for noi-registry-secret, as you will need to specify it later during installation.

9. In some Red Hat OpenShift environments, additional configuration is required to allow external traffic to reach the routes. This is due to the addition of network policies to secure pod communication traffic. For example, if you are attempting to access a route that returns a "503 Application Not Available" error, a network policy may be blocking the traffic. Ensure that your Red Hat OpenShift environment is updated to allow network policies to function correctly. For more information, see https://docs.openshift.com/container-platform/4.4/networking/configuring-networkpolicy.html and read section 3. To check whether the ingresscontroller is configured with the endpointPublishingStrategy: HostNetwork value, run the command:

oc get ingresscontroller default -n openshift-ingress-operator -o yaml

When endpointPublishingStrategy.type is set to HostNetwork, the network policy does not work against routes unless the default namespace contains the selector label. To allow traffic, add a label to the default namespace by running the command:

oc patch namespace default --type=json -p '[{"op":"add","path":"/metadata/labels","value":{"network.openshift.io/policy-group":"ingress"}}]'

10. Create a ClusterRole and ClusterRoleBinding for topology management observers. To learn how to perform this task, see https://www.ibm.com/support/knowledgecenter/SS9LQB_1.1.8/LoadingData/t_asm_loadingviakubernetes.html
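The exact role definition to use is given in the linked Agile Service Manager documentation; purely as an illustration of the shape of these objects, a read-only observer role and its binding might be sketched as follows, where the role name, rules, and namespace are assumptions for the example:

# Illustrative sketch only; take the authoritative definition from the ASM documentation
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: asm-observer-example            # hypothetical role name
rules:
- apiGroups: [""]
  resources: ["pods", "services", "nodes", "namespaces"]
  verbs: ["get", "list", "watch"]       # read-only access for topology observation
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: asm-observer-example-binding    # hypothetical binding name
subjects:
- kind: ServiceAccount
  name: noi-service-account             # service account used by the deployment
  namespace: netcool                    # hypothetical namespace
roleRef:
  kind: ClusterRole
  name: asm-observer-example
  apiGroup: rbac.authorization.k8s.io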

11. Run this command to ensure the system health job is not showing any errors:

NAMESPACE=netcool SERVICE_ACCOUNT=noi-service-account cat <

12. The operator requires a Security Context Constraint (SCC) to be bound to the target service account prior to installation. All pods use this SCC. An SCC constrains the actions that a pod can perform, such as host IPC access, which is required by the Db2 pod. For more information about storage, see “Storage” on page 105. You can choose either:
• the predefined SCC privileged, or
• a custom SCC. An example of a custom SCC definition is:

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: ibm-noi-scc
allowHostDirVolumePlugin: false
allowHostIPC: true
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: false
defaultAddCapabilities: []
allowedCapabilities:
- SETPCAP
- AUDIT_WRITE
- CHOWN
- NET_RAW
- DAC_OVERRIDE
- FOWNER
- FSETID
- KILL
- SETUID
- SETGID
- NET_BIND_SERVICE
- SYS_CHROOT
- SETFCAP
- IPC_OWNER
- IPC_LOCK
- SYS_NICE
priority: 0
fsGroup:
  type: RunAsAny
readOnlyRootFilesystem: false
requiredDropCapabilities:
- MKNOD
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret

13. Before your installation, create the service account and add the SCC to it by running the following commands:

oc create serviceaccount noi-service-account -n namespace

oc adm policy add-scc-to-user privileged system:serviceaccount:namespace:noi-service-account

oc patch serviceaccount default -p '{"imagePullSecrets": [{"name": "noi-registry-secret"}]}'

Where namespace is the namespace for the project that you are deploying to.
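As an optional sanity check (assuming you bound the predefined privileged SCC as above), you can confirm that the service account exists and is listed against the SCC:

# Confirm the service account exists in the target namespace
oc get serviceaccount noi-service-account -n namespace

# List users bound to the privileged SCC; the service account should appear here
oc get scc privileged -o jsonpath='{.users}'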

Configuring authentication
Configure passwords and secrets. You can either manually create the passwords and secrets that are required by IBM Netcool Operations Insight, or the required passwords and secrets can be generated for you by the installer. Passwords are stored in secrets.

Procedure
1. The following user passwords and secrets are required.

Users requiring password | Corresponding secret | Data key(s) in secret
smadmin | custom_resource-was-secret | WAS_PASSWORD
impactadmin | custom_resource-impact-secret | IMPACT_ADMIN_PASSWORD
icpadmin | custom_resource-icpadmin-secret | ICP_ADMIN_PASSWORD
OMNIbus root | custom_resource-omni-secret | OMNIBUS_ROOT_PASSWORD
LDAP admin | custom_resource-ldap-secret | LDAP_BIND_PASSWORD
couchdb | custom_resource-couchdb-secret | password, username=root, secret=couchdb
internal user | custom_resource-ibm-hdm-common-ui-session-secret | session
internal user | custom_resource-systemauth-secret | password, username=system
hdm | custom_resource-cassandra-auth-secret | username, password
redis | custom_resource-ibm-redis-authsecret | username, password
kafka | custom_resource-kafka-admin-secret | username, password
admin | custom_resource-kafka-client-secret | username, password

Create these passwords and secrets manually, or leave the installer to create the passwords and secrets automatically and then retrieve the passwords post-install.
2. Automatic creation of passwords and secrets.
The Netcool Operations Insight installer uses existing passwords and secrets. If any of the required passwords and secrets do not exist, the installer automatically creates random passwords and then creates the required secrets from those passwords. For automatic creation of passwords and secrets, use the following procedure:
   a) If you set the LDAP mode to proxy, you must manually configure the passwords and secrets for LDAP admin and impactadmin. For information on how to do this, refer to the Manual creation of passwords and secrets step in “Configuring authentication” on page 114.
   b) Proceed with the installation, using “Installing Netcool Operations Insight on Red Hat OpenShift” on page 118.
   c) After installation has successfully completed, you can extract the passwords from the secrets. See “Retrieving passwords from secrets” on page 131.
3. Manual creation of passwords and secrets.
All passwords must be less than 16 characters long and contain only alphanumeric characters. To create all the required passwords and secrets manually, use the following procedure:
   a) Create passwords for the users requiring passwords in step “1” on page 114, if these do not already exist.
   b) Create custom_resource-icpadmin-secret with the following command:

kubectl create secret generic custom_resource-icpadmin-secret --from-literal=ICP_ADMIN_PASSWORD=password --namespace namespace

Where
• password is the password for icpadmin.
• custom_resource is the name that you will use for your Netcool Operations Insight on OpenShift deployment in name (OLM UI install), or metadata.name in noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install Netcool Operations Insight.
c) Create custom_resource-impact-secret with the following command:

kubectl create secret generic custom_resource-impact-secret --from-literal=IMPACT_ADMIN_PASSWORD=password --namespace namespace

Where
• password is the password for impactadmin.
• custom_resource is the name that you will use for your Netcool Operations Insight on OpenShift deployment in name (OLM UI install), or metadata.name in noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install Netcool Operations Insight.
d) Create custom_resource-ldap-secret with the following command:

kubectl create secret generic custom_resource-ldap-secret --from-literal=LDAP_BIND_PASSWORD=password --namespace namespace

Where
• password is the password of your organization's LDAP server.
• custom_resource is the name that you will use for your Netcool Operations Insight on OpenShift deployment in name (OLM UI install), or metadata.name in noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install Netcool Operations Insight.
e) Create custom_resource-omni-secret with the following command:

kubectl create secret generic custom_resource-omni-secret --from-literal=OMNIBUS_ROOT_PASSWORD=password --namespace namespace

Where
• password is the root password to set for the Netcool/OMNIbus ObjectServer.
• custom_resource is the name that you will use for your Netcool Operations Insight on OpenShift deployment in name (OLM UI install), or metadata.name in noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install Netcool Operations Insight.
f) Create custom_resource-was-secret (smadmin user) with the following command:

kubectl create secret generic custom_resource-was-secret --from-literal=WAS_PASSWORD=password --namespace namespace

Where
• password is the password for the smadmin administrative user.
• custom_resource is the name that you will use for your Netcool Operations Insight on OpenShift deployment in name (OLM UI install), or metadata.name in noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install Netcool Operations Insight.
g) Create custom_resource-couchdb-secret with the following command:

kubectl create secret generic custom_resource-couchdb-secret --from-literal=password=password --from-literal=secret=couchdb --from-literal=username=root --namespace namespace

Where
• password is the password for the internal CouchDB user.
• custom_resource is the name that you will use for your Netcool Operations Insight on OpenShift deployment in name (OLM UI install), or metadata.name in noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install Netcool Operations Insight.
h) Create the secret for communication between pods with the following command:

kubectl create secret generic custom_resource-systemauth-secret --from-literal=password=password --from-literal=username=system --namespace namespace

Where:
• password is the password for communication between pods.
• custom_resource is the name that you will use for your Netcool Operations Insight on OpenShift deployment in name (OLM UI install), or metadata.name in noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install Netcool Operations Insight.
i) Create the secret for user interface communication between pods with the following command:

kubectl create secret generic custom_resource-ibm-hdm-common-ui-session-secret --from-literal=session=password --namespace namespace

Where:
• password is the password for user interface communication between pods.
• custom_resource is the name that you will use for your Netcool Operations Insight on OpenShift deployment in name (OLM UI install), or metadata.name in noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install Netcool Operations Insight.
j) Create custom_resource-cassandra-auth-secret with the following command:

kubectl create secret generic custom_resource-cassandra-auth-secret --from-literal=username=username --from-literal=password=password --namespace namespace

Where:
• username default is hdm. Do not use cassandra.
• password is the password for Cassandra authentication.
• custom_resource is the name that you will use for your Netcool Operations Insight on OpenShift deployment in name (OLM UI install), or metadata.name in noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install Netcool Operations Insight.
k) Create custom_resource-ibm-redis-authsecret with the following command:

kubectl create secret generic custom_resource-ibm-redis-authsecret --from-literal=username=username --from-literal=password=password --namespace namespace

Where:
• username default is redis.
• password is the password for Redis authentication.
• custom_resource is the name that you will use for your Netcool Operations Insight on OpenShift deployment in name (OLM UI install), or metadata.name in noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install Netcool Operations Insight.
l) Create custom_resource-kafka-admin-secret with the following command:

kubectl create secret generic custom_resource-kafka-admin-secret --from-literal=username=username --from-literal=password=password --namespace namespace

Where:
• username default is kafka.
• password is the password for the Kafka admin user.
• custom_resource is the name that you will use for your Netcool Operations Insight on OpenShift deployment in name (OLM UI install), or metadata.name in noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install Netcool Operations Insight.
m) Create custom_resource-kafka-client-secret with the following command:

kubectl create secret generic custom_resource-kafka-client-secret --from-literal=username=username --from-literal=password=password --namespace namespace

Where:
• username default is admin.
• password is the password for the Kafka client user.
• custom_resource is the name that you will use for your Netcool Operations Insight on OpenShift deployment in name (OLM UI install), or metadata.name in noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install Netcool Operations Insight.
n) Proceed with the installation, using “Installing Netcool Operations Insight on Red Hat OpenShift” on page 118. If you want to change a password after installation, see “Changing passwords and recreating secrets” on page 132.
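For convenience, the secret-creation commands in this procedure can be collected into a single script. The following sketch is illustrative only: the deployment name noi, the namespace netcool, and all of the passwords are example values that you must replace with your own.

# Hypothetical example: deployment name "noi", namespace "netcool".
# Replace every example password with your own value.
NAME=noi
NS=netcool
kubectl create secret generic $NAME-icpadmin-secret --from-literal=ICP_ADMIN_PASSWORD='changeMe1' --namespace $NS
kubectl create secret generic $NAME-impact-secret --from-literal=IMPACT_ADMIN_PASSWORD='changeMe2' --namespace $NS
kubectl create secret generic $NAME-ldap-secret --from-literal=LDAP_BIND_PASSWORD='changeMe3' --namespace $NS
kubectl create secret generic $NAME-omni-secret --from-literal=OMNIBUS_ROOT_PASSWORD='changeMe4' --namespace $NS
kubectl create secret generic $NAME-was-secret --from-literal=WAS_PASSWORD='changeMe5' --namespace $NS
kubectl create secret generic $NAME-couchdb-secret --from-literal=password='changeMe6' --from-literal=secret=couchdb --from-literal=username=root --namespace $NS
kubectl create secret generic $NAME-systemauth-secret --from-literal=password='changeMe7' --from-literal=username=system --namespace $NS
kubectl create secret generic $NAME-ibm-hdm-common-ui-session-secret --from-literal=session='changeMe8' --namespace $NS
kubectl create secret generic $NAME-cassandra-auth-secret --from-literal=username=hdm --from-literal=password='changeMe9' --namespace $NS
kubectl create secret generic $NAME-ibm-redis-authsecret --from-literal=username=redis --from-literal=password='changeMe10' --namespace $NS
kubectl create secret generic $NAME-kafka-admin-secret --from-literal=username=kafka --from-literal=password='changeMe11' --namespace $NS
kubectl create secret generic $NAME-kafka-client-secret --from-literal=username=admin --from-literal=password='changeMe12' --namespace $NS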

Installing
Follow these instructions to install IBM Netcool Operations Insight on a private cloud with Red Hat OpenShift.
Note: If you plan to install the optional service and topology management extension, then it is installed at the same time as Netcool Operations Insight.

Installing Netcool Operations Insight on Red Hat OpenShift
Follow these instructions to install Netcool Operations Insight on Red Hat OpenShift.

Before you begin
Before you install, check that the following prerequisites are met:
• Your cluster is prepared with the required tools and a custom namespace. For more information, see “Preparing your cluster” on page 107.
• Your cluster has the required storage. For more information, see “Storage” on page 105.
• You have a docker registry secret. For more information, see “Preparing your cluster” on page 107.
• You have created the required passwords and secrets, or you intend for the installer to create them. For more information, see “Configuring authentication” on page 114.
To install Netcool Operations Insight, you must install operators. They can be installed with the Operator Lifecycle Management (OLM) console or with the command line. To install through OLM, see “Installing operators with the Operator Lifecycle Management console” on page 119. To install by using the command line, see “Installing operators with the CLI using PPA” on page 120.

Installing operators with the Operator Lifecycle Management console
Follow these instructions to install the Netcool Operations Insight operators with the Operator Lifecycle Management (OLM) console.

Before you begin
The Netcool Operations Insight operator needs cluster scope permissions. It requires role-based access control (RBAC) authorization at a cluster level because it deploys and modifies Custom Resource Definitions (CRDs) and cluster roles.

Procedure
Create a catalog source
1. Select the menu navigation Administration > Cluster Settings within the Red Hat OpenShift console (version 4.3 or higher).
2. Select the OperatorHub configuration resource under the Global Configurations tab.
3. Click the Create Catalog Source button under the Sources tab.
4. Provide the catalog source name and the image URL, docker.io/ibmcom/noi-operator-catalog:1.0.0-20200620093846, and select the Create button.
5. The catalog source appears. Refresh the screen after a few minutes to ensure that the number of operators count becomes 1.
Create the Netcool Operations Insight operator
6. Select the menu navigation OperatorHub and search for the Netcool Operations Insight operator. Select the Install button.
7. Select a namespace to install the operator. Do not use namespaces that are owned by Kubernetes or OpenShift, such as kube-system or default. If you don't already have a project, create one under the navigation menu Home > Projects.
Note: If you use an OpenShift V4.3 environment and you create operators by selecting Approval Strategy > Manual, the operator is stuck in the Requires approval state. If you click Require Approval, you get a 404 error.
8. Select the Install button.
9. Under the navigation menu Operators > Installed Operators, view the Netcool Operations Insight operator. It takes a few minutes to install. Ensure that the status of the installed Netcool Operations Insight operator page is Succeeded.
10. Determine whether you want to install Netcool Operations Insight on the cloud or use a hybrid architecture. Select the Create Instance link under the correct custom resource.
11. Use the YAML or the form view to configure the properties for the Netcool Operations Insight deployment. If you are using Red Hat OpenShift V4.4.5 or earlier, you cannot use the form view and must use the CLI to directly edit the .yaml file. For more information about configurable properties for a cloud only deployment, see “Operator properties” on page 123.
Note: Changing the instances in the form view is not supported.
12. Select the Create button.
13. Under the All Instances tab, a Netcool Operations Insight formation and a Netcool Operations Insight Cloud or Hybrid instance appear. View the status of each of the updates on the installation. When the instance state shows OK, the deployment is ready.
Note: Changing an existing deployment from a Trial deployment type to a Production deployment type is not supported.
Note: The topology management capability is enabled by default. If you want to disable it, go to Operators > Installed Operators and click the Netcool Operations Insight operator. Then click Create Instance on the Cloud deployment or Hybrid deployment tab and Edit form on the upper right corner. Finally, go to the Topology section and click the toggle switch under Topology Enabled.

Note: If you update custom secrets in the OLM console, the crypto key is corrupted and the command to encrypt passwords does not work. Only update custom secrets with the CLI. For more information, see “Configuring authentication” on page 149 and “Installing operators with the CLI using PPA” on page 120. For more information about storing a certificate as a secret, see https://www.ibm.com/support/knowledgecenter/SS9LQB_1.1.8/LoadingData/t_asm_obs_configuringsecurity.html

What to do next
If you want to enable or disable a feature after your installation, you can edit the Netcool Operations Insight instance by running the command:

oc edit noi instance_name

Where instance_name is the name of the instance that you want to edit. You can then enable or disable the observer.
Note: When you disable features after installation, the resource is not automatically deleted. To find out whether the resource is deleted, check the operator log.
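As a non-interactive alternative to oc edit, a single oc patch command can toggle a feature flag. The following sketch is illustrative only: the instance name noi is hypothetical, and the field path spec.topology.observers.file is an assumption based on the operator properties table.

# Illustrative sketch: enable the File observer on an instance named "noi".
# The spec path is assumed from the operator properties table.
oc patch noi noi --type merge -p '{"spec":{"topology":{"observers":{"file":true}}}}'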

Installing operators with the CLI using PPA
Follow these instructions to install operators with the command-line interface (CLI) using PPA.

Before you begin
• To push the images to the image registry, the install-noi-archive.sh script requires one of the following:
– Podman (recommended): https://podman.io/getting-started/
– Docker: https://www.docker.com/get-started
– Skopeo: https://github.com/containers/skopeo
• To validate the image archive, the install-noi-archive.sh script requires shasum (RHEL 7/8: yum install -y perl-Digest-SHA).
• To unpack images and push them to the image registry, install-noi-archive.sh requires 150 Gi of free storage.
• The operator has cluster scope permissions. It requires role-based access control (RBAC) authorization at a cluster level because it deploys and modifies Custom Resource Definitions (CRDs) and cluster roles.

Procedure
1. Complete the following steps to verify that your IBM Passport Advantage software download is valid and has been signed by IBM:
a) Untar or unzip the download so that you have five files:
• *.tar
• *-public-key
• *-public-key-ocsp
• *-public-key-ocsp-intermediate
• *.sig
b) Ensure that you have OpenSSL installed, and then issue the following command using the signature and public key files:

openssl dgst -sha256 -verify public_key_file -signature signature_file archive_file

Example command:

openssl dgst -sha256 -verify noi-public-key -signature test.tar.sig test.tar

c) If the file has been signed by IBM, the openssl command returns the Verified OK message on the command line.
2. Run the following command to verify that the certificate used to sign your Passport Advantage download is valid, and to verify its ownership by IBM:

openssl x509 -inform pem -in public_key_ocsp_file -noout -subject -issuer -startdate -enddate

Example command:

openssl x509 -inform pem -in noi-public-key-ocsp -noout -subject -issuer -startdate -enddate

This command displays the certificate issuer, the owner, and the certificate validity dates.
3. Run the following command to verify with the Digicert Certificate Authority whether the certificate is still valid:

openssl ocsp -no_nonce -issuer intermediate_file -cert public_key_ocsp_file -VAfile intermediate_file -text -url http://ocsp.digicert.com -respout ocsptest

Example command:

openssl ocsp -no_nonce -issuer noi-public-key-ocsp-intermediate -cert noi-public-key-ocsp -VAfile noi-public-key-ocsp-intermediate -text -url http://ocsp.digicert.com

This command connects to the Digicert Certificate Authority and verifies whether the certificate used to create the keys is still valid and in good standing.
4. Extract the downloaded file into a local directory.

tar -xvf ibm-netcool-prod-1.6.1.tar

5. Run the installation script from within the archive directory:

./install-noi-archive.sh --help
This script installs images required by Netcool Operations Insight v1.6.1
Usage: ./install-noi-archive.sh
  [-n, --namespace]        : namespace
  [-u, --user]             : user
  [-p, --password]         : password
  [-d, --use-docker]       : use docker command
  [-o, --use-podman]       : use podman command
  [-s, --use-skopeo]       : use skopeo command
  [-r, --registry]         : image registry
  [-a, --validate-archive] : validate archive
  [-h, --help]

6. Validate the install archive by running the command:

./install-noi-archive.sh --validate-archive

7. Install the images to an image registry by running the command:

./install-noi-archive.sh --user user --password password --namespace namespace --use-podman
Install Netcool Operations Insight v1.6.1

registry...: image-registry.openshift-image-registry.svc
namespace..:
user.......:
command....:

Enter y to continue:

The installation takes between 30 and 60 minutes to complete.
Note: The default username is kubeadmin, the default password is the value stored in /root/auth/kubeadmin-password, and the default namespace is default.
Note: Podman is recommended for Red Hat OpenShift systems.
8. Unpack the Netcool Operations Insight operator archive by running the commands:

tar -xvf noi-operator-1.0.0.tgz

cd noi-operator-1.0.0

./install.sh --help
This script installs the Netcool Operations Insight operator
Usage: ./install.sh
  [-n, --namespace] : namespace
  [-u, --user]      : user
  [-p, --password]  : password
  [-r, --registry]  : image registry
  [-k, --kubectl]   : use kubectl
  [-h, --help]

9. Install the Netcool Operations Insight operator Kubernetes artifacts by using the command:

./install.sh --user user --password password --namespace namespace

Install Netcool Operations Insight operator v1.6.1

registry...: image-registry.openshift-image-registry.svc:5000
namespace..:
user.......:
command....:

Enter y to continue:

Note: The default namespace is default, the default user is the value obtained by running oc whoami, and the default password is the value obtained by running oc whoami -t.
10. Configure your deployment for either a full cloud or hybrid installation.
a) For a full installation, edit the parameters in deploy/crds/noi.ibm.com_nois_cr.yaml with configuration values suitable for your deployment. For more information, see “Operator properties” on page 123. Then, run the following command:

oc apply -f deploy/crds/noi.ibm.com_nois_cr.yaml

b) For a hybrid installation, edit the properties in deploy/crds/noi.ibm.com_noihybrids_cr.yaml with configuration values suitable for your deployment. For more information, see “Hybrid operator properties” on page 162. Then, run the following command:

oc apply -f deploy/crds/noi.ibm.com_noihybrids_cr.yaml

Note: Changing an existing deployment from a trial deployment type to a production deployment type is not supported.
Note: The topology management capability is enabled by default. If you want to disable it, open your custom resource file and set topology.enabled: false. The file is deploy/crds/noi.ibm.com_nois_cr.yaml for a Red Hat OpenShift deployment and deploy/crds/noi.ibm.com_noihybrids_cr.yaml for a hybrid deployment.
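For illustration, the relevant fragment of the custom resource file might look like the following sketch; the surrounding structure should be taken from the sample file shipped with the operator, and only the topology.enabled setting is the point here.

# Assumed fragment of deploy/crds/noi.ibm.com_nois_cr.yaml (illustrative):
spec:
  topology:
    enabled: false   # disables the topology management capability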

What to do next
If you want to enable or disable a feature after your installation, you can edit the Netcool Operations Insight instance by running the command:

oc edit noi instance_name

Where instance_name is the name of the instance that you want to edit. You can then enable or disable the observer.
Note: When you disable features after installation, the resource is not automatically deleted. To find out whether the resource is deleted, check the operator log.

Operator properties
Find out how to configure the parameters for your IBM Netcool Operations Insight installation.
Table 27. Netcool Operations Insight on Red Hat OpenShift properties
• clusterDomain: Use the fully qualified domain name (FQDN) to specify the cluster domain.
• deploymentType: Deployment type (trial or production).
• entitlementSecret: Entitlement secret to pull images.
• license.accept: Agreement to the license.
• version: Version.
• advanced.antiAffinity: To prevent primary and backup server pods from being installed on the same worker node, select this option.
• advanced.imagePullPolicy: The default pull policy is IfNotPresent, which causes the kubelet to skip pulling an image that already exists.
• advanced.imagePullRepository: Docker registry from which all component images are pulled.
• ldap.baseDN: Configure the LDAP base entry by specifying the base distinguished name (DN).
• ldap.bindDN: Configure the LDAP bind user identity by specifying the bind distinguished name (bind DN).
• ldap.mode: Choose standalone for a built-in LDAP server, or proxy to connect to an external organization LDAP server.
• ldap.port: Configure the port of your organization's LDAP server.
• ldap.sslPort: Configure the SSL port of your organization's LDAP server.
• ldap.suffix: Configure the top entry in the LDAP directory information tree (DIT).
• ldap.url: Configure the URL of your organization's LDAP server.
• ldap.storageClass: LDAP storage class.
• ldap.storageSize: LDAP storage size.
• persistence.enabled: Enable persistent storage.
• persistence.storageClassCassandraBackup: Storage class Cassandra backup.
• persistence.storageClassCassandraData: Storage class Cassandra data.
• persistence.storageClassCouchdb: CouchDB storage class.
• persistence.storageClassDB2: DB2 storage class.
• persistence.storageClassElastic: Storage class Elasticsearch.
• persistence.storageClassImpactGUI: Impact GUI storage class.
• persistence.storageClassImpactServer: Impact Server storage class.
• persistence.storageClassKafka: Storage class Kafka.
• persistence.storageClassNCOBackup: NCO backup storage class.
• persistence.storageClassNCOPrimary: NCO primary storage class.
• persistence.storageClassZookeeper: Storage class Zookeeper.
• persistence.storageSizeCassandraBackup: Storage size Cassandra backup.
• persistence.storageSizeCassandraData: Storage size Cassandra data.
• persistence.storageSizeCouchdb: CouchDB storage size.
• persistence.storageSizeDB2: DB2 storage size.
• persistence.storageSizeElastic: Elasticsearch storage size.
• persistence.storageSizeImpactGUI: Impact GUI storage size.
• persistence.storageSizeImpactServer: Impact Server storage size.
• persistence.storageSizeKafka: Storage size Kafka.
• persistence.storageSizeNCOBackup: NCO backup storage size.
• persistence.storageSizeNCOPrimary: NCO primary storage size.
• persistence.storageSizeZookeeper: Storage size Zookeeper.
• topology.appDisco.enabled: Enable Application Discovery services and its observer.
• topology.appDisco.dburl: DB2 host URL for Application Discovery.
• topology.appDisco.dbsecret: DB2 secret for Application Discovery.
• topology.appDisco.secure: Enable secure connection to the DB2 host URL for Application Discovery.
• topology.appDisco.certSecret: This secret must contain the DB2 certificate, named tls.crt. Applicable only if the db.secure property is true.
• topology.appDisco.dsStorageClass: Storage class for persistent volume claims used by all discovery servers. Applicable only if persistence.enabled is true.
• topology.appDisco.dsStorageSize: Size of persistent volume claims used by all discovery servers. Applicable only if persistence.enabled is true.
• topology.appDisco.dsDefaultExistingClaim: Name of an existing persistent volume claim (expected in the same namespace) to be used by the discovery server. Applicable only if persistence.enabled is true.
• topology.appDisco.pssStorageClass: Storage class for persistent volume claims used by the primary storage server. Applicable only if persistence.enabled is true.
• topology.appDisco.pssStorageSize: Size of persistent volume claims used by the primary storage server. Applicable only if persistence.enabled is true.
• topology.appDisco.pssExistingClaim: Name of an existing persistent volume claim (expected in the same namespace) to be used by the primary storage server. Applicable only if persistence.enabled is true.
• topology.appDisco.sssStorageClass: Storage class for persistent volume claims used by all secondary storage servers. Applicable only if persistence.enabled is true.
• topology.appDisco.sssStorageSize: Size of persistent volume claims used by all secondary storage servers.
• topology.enabled: Enable topology.
• topology.netDisco: Enable Network Discovery services and its observer.
• topology.observers.aws: Enable AWS observer.
• topology.observers.azure: Enable Azure observer.
• topology.observers.bigfixinventory: Enable BigFix Inventory observer.
• topology.observers.cienablueplanet: Enable Ciena Blue Planet observer.
• topology.observers.ciscoaci: Enable Cisco ACI observer.
• topology.observers.contrail: Enable Contrail observer.
• topology.observers.dns: Enable DNS observer.
• topology.observers.docker: Enable Docker observer.
• topology.observers.dynatrace: Enable Dynatrace observer.
• topology.observers.file: Enable File observer.
• topology.observers.googlecloud: Enable Google Cloud observer.
• topology.observers.ibmcloud: Enable IBM Cloud observer.
• topology.observers.itnm: Enable ITNM observer.
• topology.observers.jenkins: Enable Jenkins observer.
• topology.observers.junipercso: Enable Juniper CSO observer.
• topology.observers.kubernetes: Enable Kubernetes observer.
• topology.observers.newrelic: Enable New Relic observer.
• topology.observers.openstack: Enable OpenStack observer.
• topology.observers.rancher: Enable Rancher observer.
• topology.observers.rest: Enable REST observer.
• topology.observers.servicenow: Enable ServiceNow observer.
• topology.observers.taddm: Enable TADDM observer.
• topology.observers.vmvcenter: Enable VMware vCenter observer.
• topology.observers.vmwarensx: Enable VMware NSX observer.
• topology.observers.zabbix: Enable Zabbix observer.
• topology.storageClassCassandraBackupTopology: Storage class Cassandra backup. Production only.
• topology.storageClassCassandraDataTopology: Storage class Cassandra data. Production only.
• topology.storageClassElasticTopology: Storage class Elasticsearch. Production only.
• topology.storageClassFileObserver: File observer storage class. Production only.
• topology.storageSizeCassandraBackupTopology: Storage size Cassandra backup. Production only.
• topology.storageSizeCassandraDataTopology: Storage size Cassandra data. Production only.
• topology.storageSizeElasticTopology: Storage size Elasticsearch. Production only.
• topology.storageSizeFileObserver: File observer storage size. Production only.
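To show how these dotted property names map into the custom resource, here is a minimal, hypothetical cloud-only custom resource sketch. The apiVersion, kind, and exact nesting are assumptions inferred from the CRD names in this chapter; always start from the sample noi.ibm.com_nois_cr.yaml file shipped with the operator, and treat the host names and secret names below as example values.

apiVersion: noi.ibm.com/v1beta1   # assumed API version
kind: NOI                         # assumed kind, from the nois.noi.ibm.com CRD
metadata:
  name: noi                       # your custom_resource name
  namespace: netcool              # example namespace
spec:
  license:
    accept: true
  version: 1.6.1
  deploymentType: trial
  clusterDomain: apps.mycluster.example.com   # example FQDN
  entitlementSecret: noi-registry-secret      # example secret name
  ldap:
    mode: standalone
  persistence:
    enabled: true
  topology:
    enabled: true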

Installing the connection layer operator with the Operator Lifecycle Management console
Learn how to install the connection layer operator with the Operator Lifecycle Management (OLM) console. Each connection layer operator establishes a connection to an additional ObjectServer in your environment.

Before you begin
You must first deploy IBM Netcool Operations Insight on Red Hat OpenShift in your cloud or hybrid environment. For more information, see “Installing cloud native components (UI)” on page 158 and “Installing operators with the Operator Lifecycle Management console” on page 119. This installation connects an ObjectServer aggregation pair to a Netcool Operations Insight on OpenShift instance.
Before you deploy the connection layer, create two secrets:
1. Create a secret to enable cloud native Netcool Operations Insight components to access your on-premises Operations Management ObjectServer.

kubectl create secret generic custom_resource-omni-secret --from-literal=OMNIBUS_ROOT_PASSWORD=omni_password --namespace namespace

Where:
• custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install the cloud native components.
• omni_password is the root password for the on-premises Netcool/OMNIbus that you want to connect to.
2. Create a secret to enable SSL communication between the OMNIbus component of your on-premises Operations Management installation and the cloud native Netcool Operations Insight components. If you do not require an SSL connection, create the secret with blank entries. Complete the following steps to configure authentication:

a. Configure OMNIbus on your on-premises Operations Management installation to use SSL, if it is not doing so already. To check, run the command oc get secrets -n namespace and check whether the secret custom_resource-omni-certificate-secret exists. If the secret does not exist and the OMNIbus components are using SSL, complete the following steps.
b. Extract the certificate from your on-premises Operations Management installation.

$NCHOME/bin/nc_gskcmd -cert -extract -db "key_db" -pw password -label "cert_name" -target "ncomscert.arm"

Where:
• key_db is the name of the key database file.
• password is the password to your key database.
• cert_name is the name of your certificate.
c. Copy the extracted certificate, ncomscert.arm, over to the infrastructure node of your Red Hat OpenShift cluster, or to the node on your cluster where the kubectl CLI is installed.
d. Create a secret for the certificate.

kubectl create secret generic custom_resource-omni-certificate-secret --from-literal=PASSWORD=password --from-file=ROOTCA=certificate --namespace namespace

Where:
• custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install).
• password is a password of your choice.
• certificate is the path and file name of the certificate that was copied to your cluster in the previous step, ncomscert.arm.
• namespace is the name of the namespace into which you want to install the cloud native components.
Note: If the ObjectServer is not named 'AGG_V', which is the default, then you must set the global.hybrid.objectserver.config.ssl.virtualPairName parameter when you configure the installation parameters later. For more information, see “Hybrid operator properties” on page 162.

About this task
Learn about the properties that can be specified for each connection layer:

Table 28. Connection layer properties
• noiReleaseName: Provide the release name to be associated with the ObjectServer properties. The noiReleaseName property is the release name of the hybrid or cloud instance that must be connected with the ObjectServer aggregation pair.
• objectServer.backupHost: Host name of the backup ObjectServer.
• objectServer.backupPort: Port number of the backup ObjectServer.
• objectServer.deployPhase: Determines when the OMNIbus Netcool Operations Insight on OpenShift schema is deployed.
• objectServer.primaryHost: Host name of the primary ObjectServer.
• objectServer.primaryPort: Port number of the primary ObjectServer.
• objectServer.sslRootCAName: Specifies the common name (CN) for the certificate authority (CA) certificate.
• objectServer.sslVirtualPairName: Only needed when setting up an SSL connection.
• objectServer.username: User name for connecting to an on-premises ObjectServer.

The operator has cluster scope permissions. It requires role-based access control (RBAC) authorization at a cluster level because it deploys and modifies Custom Resource Definitions (CRDs) and cluster roles.

Procedure
1. Log in to the OLM console with a URL of the following format:

https://console-openshift-console.apps.hostname/

Where hostname is the host name of the master node.
2. To install a connection layer on your cloud architecture, select the Create Instance link under the hybrid or cloud custom resource.
3. Use the YAML or Form view and provide the required values to install a connection layer. For more information, see Table 28 on page 127.
Note: Set the objectServer.deployPhase property to install and do not change it.
4. Select the Create button and specify a release name for the connection layer.
5. Under the All Instances tab, a connection layer instance appears. View the status for updates on the installation. When the instance state shows OK, the connection layer is fully deployed.

What to do next
Deploy a connection layer for each separate aggregation pair that you want to connect to a single Netcool Operations Insight on OpenShift instance.

Installing the connection layer operator with the CLI
Learn how to install the connection layer operator with the command line interface (CLI). Each connection layer operator establishes a connection to an additional ObjectServer in your environment.

Before you begin
You must first deploy IBM Netcool Operations Insight on Red Hat OpenShift in your cloud or hybrid environment. For more information, see “Installing operators with the CLI using PPA” on page 120 and “Installing cloud native components (CLI)” on page 159. This installation connects an ObjectServer aggregation pair to a Netcool Operations Insight on OpenShift instance.
Before you deploy the connection layer, create two secrets:
1. Create a secret to enable cloud native Netcool Operations Insight components to access your on-premises Operations Management ObjectServer.

kubectl create secret generic custom_resource-omni-secret --from-literal=OMNIBUS_ROOT_PASSWORD=omni_password --namespace namespace

Where:
• custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install the cloud native components.
• omni_password is the root password for the on-premises Netcool/OMNIbus that you want to connect to.
2. Create a secret to enable SSL communication between the OMNIbus component of your on-premises Operations Management installation and the cloud native Netcool Operations Insight components. If you do not require an SSL connection, create the secret with blank entries. Complete the following steps to configure authentication:
a. Configure OMNIbus on your on-premises Operations Management installation to use SSL, if it is not doing so already. To check, run the command oc get secrets -n namespace and check whether the secret custom_resource-omni-certificate-secret exists. If the secret does not exist and the OMNIbus components are using SSL, complete the following steps.
b. Extract the certificate from your on-premises Operations Management installation.

$NCHOME/bin/nc_gskcmd -cert -extract -db "key_db" -pw password -label "cert_name" -target "ncomscert.arm"

Where:
• key_db is the name of the key database file.
• password is the password to your key database.
• cert_name is the name of your certificate.
c. Copy the extracted certificate, ncomscert.arm, over to the infrastructure node of your Red Hat OpenShift cluster, or to the node on your cluster where the kubectl CLI is installed.
d. Create a secret for the certificate.

kubectl create secret generic custom_resource-omni-certificate-secret --from-literal=PASSWORD=password --from-file=ROOTCA=certificate --namespace namespace

Where:
• custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install).
• password is a password of your choice.
• certificate is the path and file name of the certificate that was copied to your cluster in the previous step, ncomscert.arm.
• namespace is the name of the namespace into which you want to install the cloud native components.
Note: If the ObjectServer is not named 'AGG_V', which is the default, then you must set the global.hybrid.objectserver.config.ssl.virtualPairName parameter when you configure the installation parameters later. For more information, see “Hybrid operator properties” on page 162.

About this task
Learn about the properties that can be specified for each connection layer:

Table 29. Connection layer properties
• noiReleaseName: Provide the release name to be associated with the ObjectServer properties. The noiReleaseName property is the release name of the hybrid or cloud instance that must be connected with the ObjectServer aggregation pair.
• objectServer.backupHost: Host name of the backup ObjectServer.
• objectServer.backupPort: Port number of the backup ObjectServer.
• objectServer.deployPhase: Determines when the OMNIbus Netcool Operations Insight on OpenShift schema is deployed.
• objectServer.primaryHost: Host name of the primary ObjectServer.
• objectServer.primaryPort: Port number of the primary ObjectServer.
• objectServer.sslRootCAName: Specifies the common name (CN) for the certificate authority (CA) certificate.
• objectServer.sslVirtualPairName: Only needed when setting up an SSL connection.
• objectServer.username: User name for connecting to an on-premises ObjectServer.

The operator has cluster scope permissions. It requires role-based access control (RBAC) authorization at a cluster level because it deploys and modifies Custom Resource Definitions (CRDs) and cluster roles. Create and deploy a custom resource for the connection layer by completing the following steps:

Procedure
1. For each connection layer, create the custom resource by editing the parameters in the deploy/crds/file_name.yaml file, where file_name.yaml is the name of your custom resource YAML file for your cloud or hybrid deployment. Specify the connection layer release name and the ObjectServer details. For more information, see Table 29 on page 129.
Note: Specify a unique release name for each connection layer.
2. Run the following command:

kubectl apply -f deploy/crds/file_name.yaml
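As an illustration, a connection layer custom resource might look like the following sketch. The kind and field layout are assumptions inferred from the noihybridlayers.noi.ibm.com CRD name and the properties in Table 29; the host names, ports, and release names are all example values.

apiVersion: noi.ibm.com/v1beta1      # assumed API version
kind: NOIHybridLayer                 # assumed kind, from the noihybridlayers CRD
metadata:
  name: noi-agg-pair-2               # unique release name for this connection layer
  namespace: netcool                 # example namespace
spec:
  noiReleaseName: noi                # release name of the hybrid or cloud instance
  objectServer:
    primaryHost: omnihost1.example.com
    primaryPort: 4100
    backupHost: omnihost2.example.com
    backupPort: 4100
    username: root
    deployPhase: install             # set to install and do not change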

What to do next
Deploy a connection layer for each separate aggregation pair that you want to connect to a single Netcool Operations Insight on OpenShift instance.

Post-installation tasks
Perform the following tasks to configure your IBM Netcool Operations Insight release.
• Enable the launch-in-context menu to start manual or semi-automated runbooks from events for your cloud IBM Netcool Operations Insight on Red Hat OpenShift deployment. For more information, see “Enabling runbook automation for your IBM Netcool Operations Insight on Red Hat OpenShift deployment” on page 249.
• Install Netcool/Impact and complete post-installation tasks. For more information, see “Installing Netcool/Impact to run the trigger service” on page 249 and “Postinstallation of Netcool/Impact V7.1.0.18 or higher” on page 250.
• Create triggers and link events with the runbooks. For more information, see “Triggers” on page 400.

Retrieving passwords from secrets
After a successful installation of IBM Netcool Operations Insight, passwords can be retrieved from the secrets that contain them.

About this task
To retrieve a password from Netcool Operations Insight, use the following commands.
icpadmin password

kubectl get secret custom_resource-icpadmin-secret -o json -n namespace | grep ICP_ADMIN_PASSWORD | cut -d : -f2 | cut -d '"' -f2 | base64 -d;echo

sysadmin password

kubectl get secret custom_resource-was-secret -o json -n namespace | grep WAS_PASSWORD | cut -d : -f2 | cut -d '"' -f2 | base64 -d;echo

impact admin password

kubectl get secret custom_resource-impact-secret -o json -n namespace | grep IMPACT_ADMIN_PASSWORD | cut -d : -f2 | cut -d '"' -f2 | base64 -d;echo

omnibus password

kubectl get secret custom_resource-omni-secret -o json -n namespace | grep OMNIBUS_ROOT_PASSWORD | cut -d : -f2 | cut -d '"' -f2 | base64 -d;echo

couchdb password

kubectl get secret custom_resource-couchdb-secret -o json -n namespace | grep password | cut -d : -f2 | cut -d '"' -f2 | base64 -d;echo

LDAP admin password

kubectl get secret custom_resource-ldap-secret -o json -n namespace | grep LDAP_BIND_PASSWORD | cut -d : -f2 | cut -d '"' -f2 | base64 -d;echo

Where:
• custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml or noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace in which Netcool Operations Insight is installed.
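If you prefer not to parse the JSON output with grep and cut, an equivalent jsonpath-based command reads a single key directly from the secret. The deployment name noi and namespace netcool below are example values.

# Read one key (here WAS_PASSWORD, for the sysadmin password) straight from the secret:
kubectl get secret noi-was-secret -n netcool -o jsonpath='{.data.WAS_PASSWORD}' | base64 -d; echo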

Creating users
In Netcool Operations Insight on OpenShift, all users must be created either in the openLDAP user interface, if you are using the standalone LDAP pod that comes with Netcool Operations Insight on OpenShift, or on your enterprise LDAP server, if you are using the LDAP proxy option. Users must not be created in other Netcool Operations Insight components, because these users will not be recognized in Netcool Operations Insight on OpenShift.

About this task
For more information on how to create users by using the openLDAP user interface, see “Administering users” on page 353.

Changing passwords and recreating secrets
Changes to any of the passwords used by IBM Netcool Operations Insight require the secrets that use those passwords to be recreated, and the pods that use those secrets to be restarted. Use the following procedure if you need to change any of these passwords.

Procedure
Use this table to help you identify the secret that uses a password, and the pods that use a secret.

Password: smadmin
Corresponding secret: custom_resource-was-secret
Dependent pods: custom_resource-webgui-0, custom_resource-ea-noi-layer-eanoiactionservice, custom_resource-ea-noi-layer-eanoigateway, custom_resource-ibm-hdm-common-ui-uiserver

Password: impactadmin
Corresponding secret: custom_resource-impact-secret
Dependent pods: custom_resource-impactgui-0, custom_resource-nciserver-0, custom_resource-nciserver-1, custom_resource-webgui-0

Password: icpadmin
Corresponding secret: custom_resource-icpadmin-secret
Dependent pods: none

Password: OMNIbus root
Corresponding secret: custom_resource-omni-secret
Dependent pods: custom_resource-webgui-0, custom_resource-ea-noi-layer-eanoiactionservice, custom_resource-ea-noi-layer-eanoigateway, custom_resource-ibm-hdm-analytics-dev-aggregationnormalizerservice, custom_resource-ncobackup, custom_resource-ncoprimary, custom_resource-nciserver-0, custom_resource-nciserver-1

Password: LDAP admin
Corresponding secret: custom_resource-ldap-secret
Dependent pods: custom_resource-openldap-0, custom_resource-impactgui-0, custom_resource-nciserver-0, custom_resource-nciserver-1, custom_resource-ncobackup, custom_resource-ncoprimary, custom_resource-scala, custom_resource-webgui-0

Password: couchdb
Corresponding secret: custom_resource-couchdb-secret
Dependent pods: custom_resource-couchdb, custom_resource-ibm-hdm-analytics-dev-aggregationcollaterservice, custom_resource-ibm-hdm-analytics-dev-trainer

Password: internal password for inter-pod communication
Corresponding secret: custom_resource-ibm-hdm-common-ui-session-secret
Dependent pods: custom_resource-ibm-hdm-common-ui-uiserver

Password: internal password
Corresponding secret: custom_resource-systemauth-secret
Dependent pods: custom_resource-couchdb, custom_resource-ibm-hdm-analytics-dev-aggregationcollaterservice, custom_resource-ibm-hdm-analytics-dev-trainer

Password: hdm
Corresponding secret: custom_resource-cassandra-auth-secret
Dependent pods: custom_resource-cassandra

Password: redis
Corresponding secret: custom_resource-ibm-redis-authsecret
Dependent pods: custom_resource-ibm-hdm-analytics-dev-collater-aggregationservice, custom_resource-ibm-hdm-analytics-dev-dedup-aggregationservice

Password: kafka
Corresponding secret: custom_resource-kafka-admin-secret
Dependent pods: custom_resource-ibm-hdm-analytics-dev-archivingservice, custom_resource-ibm-hdm-analytics-dev-collater-aggregationservice, custom_resource-ibm-hdm-analytics-dev-dedup-aggregationservice, custom_resource-ibm-hdm-analytics-dev-inferenceservice, custom_resource-ibm-hdm-analytics-dev-ingestionservice, custom_resource-ibm-hdm-analytics-dev-normalizer-aggregationservice

Password: admin
Corresponding secret: custom_resource-kafka-client-secret
Dependent pods: custom_resource-ibm-hdm-analytics-dev-archivingservice, custom_resource-ibm-hdm-analytics-dev-collater-aggregationservice, custom_resource-ibm-hdm-analytics-dev-dedup-aggregationservice, custom_resource-ibm-hdm-analytics-dev-inferenceservice, custom_resource-ibm-hdm-analytics-dev-ingestionservice, custom_resource-ibm-hdm-analytics-dev-normalizer-aggregationservice

Where custom_resource is the name that you will use for your Netcool Operations Insight deployment in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install).
To change a password, use the following procedure.
1. Change the password that you want to change.
2. Use the table at the start of this topic to find the secret that corresponds to the password that has been changed, and delete this secret.

kubectl delete secret secretname --namespace namespace

Where:
• secretname is the name of the secret to be recreated.
• namespace is the name of the namespace in which the secret to be recreated exists.
3. Recreate the secret with the desired new password.
4. Use the table at the start of this topic to find which pods depend on the secret that you have recreated and require restarting.
5. Restart the required pods by running the following command:

kubectl delete pod podname

Where podname is the name of the pod that requires restarting.
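As a hypothetical worked example, changing the Netcool/OMNIbus root password on a deployment named noi in namespace netcool would look like the following sketch; the pod names in the last step are illustrative and should be taken from the table above and verified with kubectl get pods.

# 1. Delete the old secret.
kubectl delete secret noi-omni-secret --namespace netcool
# 2. Recreate the secret with the new password.
kubectl create secret generic noi-omni-secret --from-literal=OMNIBUS_ROOT_PASSWORD=newPassword --namespace netcool
# 3. Restart the dependent pods (example names; check with kubectl get pods).
kubectl delete pod noi-ncoprimary-0 noi-ncobackup-0 noi-webgui-0 --namespace netcool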

Using custom certificates for routes
Red Hat OpenShift automatically generates TLS certificates for external routes, but you can use your own certificate instead. Learn how to update external routes to use custom certificates on OpenShift. Internal TLS microservice communications are not affected.
You can update the OpenShift ingress to use a custom certificate for all external routes across the cluster. For more information, see https://docs.openshift.com/container-platform/4.4/authentication/certificates/replacing-default-ingress-certificate.html.
If required, you can add a custom certificate for a single external route. For more information, see https://docs.openshift.com/container-platform/4.4/networking/routes/secured-routes.html.

Uninstalling operators
Follow these instructions to learn how to uninstall operators. You can uninstall the operators by using the Operator Lifecycle Management (OLM) console or by using the command line. If you installed Netcool Operations Insight with an OLM install, follow “Uninstalling using Operator Lifecycle Management” on page 135; if you installed it with the command line, follow “Uninstalling using the command line” on page 136.

Uninstalling using Operator Lifecycle Management
1. Connect to the OpenShift environment under the namespace where Netcool Operations Insight is installed, and delete the Netcool Operations Insight operator instance (instance_name in the following command is the name of your instance):

oc delete noi instance_name

2. Delete the Netcool Operations Insight operator. Go to Operators > Installed Operators under the navigation page on the OpenShift console, select your project, and select Uninstall the Netcool Operations Insight Operator.
3. Search for the Custom Resource Definitions (CRDs) created by the Netcool Operations Insight installation and delete them by running the commands:

oc get crd | egrep "noi|cem|asm"
oc delete crd crd_name

For example:

oc get crd | egrep "noi|cem|asm"
asmformations.asm.ibm.com         2020-05-17T19:44:53Z
asmhybrids.asm.ibm.com            2020-05-17T19:44:53Z
asms.asm.ibm.com                  2020-05-17T19:44:53Z
cemformations.cem.ibm.com         2020-05-17T19:44:53Z
cemserviceinstances.cem.ibm.com   2020-05-17T19:44:53Z
noiformations.noi.ibm.com         2020-05-17T19:42:28Z
noihybridlayers.noi.ibm.com       2020-05-17T19:42:28Z
noihybrids.noi.ibm.com            2020-05-17T19:42:28Z
nois.noi.ibm.com                  2020-05-17T19:42:28Z

Delete all the CRDs that start with noi, asm, and cem.
4. Search for the secrets created by Netcool Operations Insight and delete them by running the commands:

oc get secrets | grep instance_name
oc delete secrets secret_name

For example, find all the secrets created by the Netcool Operations Insight operator instance name noi:

oc get secrets | grep noi
noi-asm-credentials                          Opaque                           2   22h
noi-ca-secret                                Opaque                           4   22h
noi-cassandra-auth-secret                    Opaque                           2   22h
noi-cem-brokers-cred-secret                  Opaque                           2   22h
noi-cem-cemusers-cred-secret                 Opaque                           6   22h
noi-cem-channelservices-cred-secret          Opaque                           2   22h
noi-cem-couchdb-cred-secret                  Opaque                           3   22h
noi-cem-download-secret                      Opaque                           1   22h
noi-cem-email-auth-secret                    Opaque                           3   22h
noi-cem-event-analytics-ui-session-secret    Opaque                           1   22h
noi-cem-ibm-redis-cred-secret                Opaque                           1   22h
noi-cem-intctl-hmac-secret                   Opaque                           2   22h
noi-cem-integrationcontroller-cred-secret    Opaque                           2   22h
noi-cem-model-secret                         Opaque                           4   22h
noi-cem-nexmo-auth-secret                    Opaque                           2   22h
noi-cem-service-secret                       Opaque                           0   22h
noi-couchdb-secret                           Opaque                           3   22h
noi-custom-secrets                           Opaque                           1   22h
noi-ibm-hdm-common-ui-session-secret         Opaque                           1   22h
noi-ibm-redis-authsecret                     Opaque                           2   22h
noi-icpadmin-secret                          Opaque                           1   22h
noi-impact-secret                            Opaque                           1   22h
noi-kafka-admin-secret                       Opaque                           2   22h
noi-kafka-client-secret                      Opaque                           2   22h
noi-ldap-secret                              Opaque                           1   22h
noi-omni-certificate-secret                  Opaque                           3   22h
noi-omni-secret                              Opaque                           1   22h
noi-rba-devops-secret                        Opaque                           2   22h
noi-rba-jwt-secret                           Opaque                           1   22h
noi-registry-secret                          kubernetes.io/dockerconfigjson   1   30h
noi-systemauth-secret                        Opaque                           2   22h
noi-was-oauth-cnea-secrets                   Opaque                           2   22h
noi-was-secret                               Opaque                           1   22h
noi-zk-kafka-client-secret                   Opaque                           2   22h
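After reviewing the list, the matching secrets can be deleted in one pass with a sketch like the following; the grep pattern assumes the instance name noi and should be checked against the listed output first.

# Delete every secret whose name starts with "noi-" (verify the list first):
oc get secrets -o name | grep '^secret/noi-' | xargs oc delete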

5. Delete the persistent volume claims, persistent volumes, and the storage class.

Uninstalling using the command line
1. Delete the operator by running the following commands:

oc delete deployment noi-operator
oc delete role noi-operator
oc delete rolebinding noi-operator
oc delete clusterrole noi-operator
oc delete clusterrolebinding noi-operator-cluster
oc delete serviceaccount noi-operator
oc delete crd noiformations.noi.ibm.com
oc delete crd noihybridlayers.noi.ibm.com
oc delete crd noihybrids.noi.ibm.com
oc delete crd nois.noi.ibm.com

2. Repeat steps 3, 4, and 5 from “Uninstalling using Operator Lifecycle Management” on page 135.
3. Remove any routes, config maps, and secrets that are left behind after completing steps 3, 4, and 5 in “Uninstalling using Operator Lifecycle Management” on page 135.

Deploying on a hybrid architecture
Use these instructions to prepare and install a hybrid deployment, which is composed of cloud native Netcool Operations Insight components on Red Hat OpenShift, and an on-premises Operations Management installation. Click here to download the Netcool Operations Insight Hybrid Installation Guide.
This hybrid configuration leverages the power of Netcool Operations Insight's cloud native components, without the need to deploy the full Netcool Operations Insight on Red Hat OpenShift solution. You can deploy only the cloud native Netcool Operations Insight components from the Netcool Operations Insight package on Red Hat OpenShift, and connect them to an on-premises Operations Management installation. The cloud native Netcool Operations Insight components provide cloud native event analytics, cloud native event integrations, runbook automations, and optionally topology management. The on-premises Operations Management installation provides the ObjectServer and Web GUI(s).
The hybrid solution can be deployed in high availability (HA) mode, or non-HA mode.
• HA mode: An HA hybrid deployment is composed of cloud native Netcool Operations Insight components on Red Hat OpenShift and an on-premises Operations Management installation that has multiple Web GUI instances to provide redundancy. Some extra deployment steps, which are tagged as HA-only, are required for an HA hybrid deployment.
• Non-HA mode: A non-HA hybrid deployment is composed of cloud native Netcool Operations Insight components on Red Hat OpenShift and an on-premises Operations Management installation that has only one Web GUI. Omit the HA-only steps for a non-HA hybrid deployment.
Note: Integration with on-premises IBM Agile Service Manager is not supported for hybrid deployments.

Preparing
Follow these instructions to prepare a hybrid Netcool Operations Insight deployment.

Preparing on-premises Operations Management
Prepare an on-premises Operations Management installation, in which Event Analytics is disabled.

Before you begin
Ensure that the following requirements are met:

• The primary and backup ObjectServers in the on-premises Operations Management installation are running, and are listening on external IP addresses.
Note: Integration with on-premises IBM Agile Service Manager is not supported for hybrid deployments.

Procedure
1. Install on-premises Operations Management. If Operations Management V1.6.1 is not already installed, then install it, or upgrade to it. For more information, see “Installing Operations Management on premises” on page 47.
2. Disable Event Analytics. In a hybrid installation, the on-premises Event Analytics capability must be disabled before the cloud native Netcool Operations Insight components are installed.
a) Remove the ncw_analytics_admin role from each of your users.
1) Select Console Settings > User Roles and select your user from the users who are listed in Available Users.
2) Remove the role ncw_analytics_admin for your user and save the changes.
3) Repeat for each of your users, and then log out and back in again.
b) Delete the ImpactUI data provider from DASH.
1) Open Console Settings > Connections.
2) Find the entry for ImpactUI and delete it.
c) Remove the ObjectServer source for Event Analytics from the Impact data model.
1) Log in to the Netcool/Impact UI with a URL in the following format: https://impact_host:impact_port/ibm/console.
2) In the Netcool/Impact UI, from the list of available projects, select the NOI project.
3) Select the Data Model tab, and then ObjectServerForNOI.
4) Remove the value in the Password field, and then change the Host Name for the Primary Source and Backup Source, so that they do not point to an ObjectServer.

3. Check available space on the ObjectServer. The cloud native analytics service, installed as part of the hybrid deployment, introduces a number of new columns to the Netcool/OMNIbus ObjectServer, which take up space. Before starting your cloud native Netcool Operations Insight components deployment, ensure that you have sufficient space on the ObjectServer for these columns. The ObjectServer has a fixed row size of 64 kilobytes. The standard ObjectServer columns use up some of this space, and as you add custom fields, or integrate products such as IBM Control Desk, IBM Operations Analytics - Predictive Insights, and IBM Tivoli Network Manager, more of this space is used up.
a) From the command line, launch the ObjectServer SQL interface, nco_sql, by running the following command.

bin/nco_sql -user root -server server_name

Where server_name is the name of the ObjectServer.
b) At the prompt, specify the root password.
c) In the ObjectServer SQL interface, issue the following command.

describe status;
go

The system returns a listing of the ObjectServer alerts.status columns, together with columns indicating the type, size in bytes, and key for each column.
d) Save the output as a text file, and then import it into a spreadsheet program of your choice, so that each column in the text file generates a separate column in the resulting spreadsheet.

e) Use a spreadsheet formula to calculate your ObjectServer row size in kilobytes.
f) Subtract the value determined in the previous step from the 64 kilobytes maximum row size. This is the available space in your ObjectServer row.
g) The cloud native analytics service introduces a number of new columns to the ObjectServer that total 8 kilobytes. If you do not have sufficient space in your ObjectServer row, consider removing any redundant custom columns.
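If you prefer to total the sizes on the command line instead of in a spreadsheet, a rough sketch like the following works, assuming that you saved the describe status output to a file named status_columns.txt and that the size in bytes is the third whitespace-separated field on each column line.

# Sum the size column (assumed to be field 3) and report the row size in KB:
awk '{ total += $3 } END { printf "Row size: %.1f KB\n", total/1024 }' status_columns.txt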

Sizing for a hybrid deployment
Learn about the sizing requirements for a hybrid deployment on Red Hat OpenShift.

Hardware sizing for a hybrid deployment on OpenShift

Table 30. Minimum recommendations for a multi-node cluster hybrid deployment
(Values are given for Demo/PoC/Trial and Production deployments.)

Event Management sizing information
• Event rate throughput: steady state events per second: 20 (trial), 50 (production); burst rate events per second: 100 (trial), 500 (production).

Topology Management sizing information
• System size (approximate resources): 200,000 (trial), 1,000,000 (production).
• Event rate throughput: steady state events per second: 10 (trial), 50 (production); burst rate events per second: 10 (trial), 200 (production).

Environment options
• Container size: Trial, Production.
• High availability: No (trial), Yes (production).

Total requirements: Netcool Operations Insight including Red Hat OpenShift and infrastructure nodes
• Minimum node count: 7 (trial), 10 (production); recommended node count: 8 (trial), 11 (production).
• x86 CPUs: minimum 52 (trial), 110 (production); recommended 60 (trial), 112 (production).
• Memory (GB): minimum 104 (trial), 296 (production); recommended 120 (trial), 304 (production).
• Disk (GB): minimum 1300 (trial), 3200 (production); recommended 1400 (trial), 3700 (production).
• Persistent storage requirements (Gi): 830 (trial), 2760 (production).
• Total disk IOPS requirements: 1360 (trial), 7750 (production).

Total requirements: Netcool Operations Insight services only
• Minimum node count: 5 (trial), 6 (production); recommended node count: 6 (trial), 6 (production).
• x86 CPUs: minimum 40 (trial), 96 (production); recommended 48 (trial), 96 (production).
• Memory (GB): minimum 80 (trial), 240 (production); recommended 96 (trial), 240 (production).
• Disk (GB): minimum 500 (trial), 1800 (production); recommended 600 (trial), 1800 (production).
• Persistent storage requirements (Gi): 825 (trial), 2755 (production).
• Total disk IOPS requirements: 1360 (trial), 7750 (production).

Hardware sizing requirements
OpenShift infrastructure node (CPU cores, disk, and memory requirements)
• Minimum node count: 1 (trial), 1 (production); recommended node count: 1 (trial), 2 (production).
• x86 CPUs: 2 (trial), 2 (production).
• Memory (GB): 8 (trial), 8 (production).
• Disk (GB): 500 (trial), 500 (production).

OpenShift control plane (master) nodes (CPU cores, disk, and memory requirements)
• Minimum node count: 1 (trial), 3 (production); recommended node count: 1 (trial), 3 (production).
• x86 CPUs: 4 (trial), 4 (production).
• Memory (GB): 16 (trial), 16 (production).
• Disk (GB): 300 (trial), 300 (production).

Netcool Operations Insight components - suggested configuration
OpenShift compute (worker) nodes (CPU cores, disk, and memory requirements)
• Minimum node count: 5 (trial), 6 (production); recommended node count: 6 (trial), 6 (production).
• x86 CPUs: 8 (trial), 16 (production).
• Memory (GB): 16 (trial), 40 (production).
• Disk (GB): 100 (trial), 300 (production).

Persistent storage minimum requirements (Gi)
• Cassandra: 200 (trial), 1500 (production).
• Kafka: 150 (trial), 300 (production).
• Zookeeper: 15 (trial), 30 (production).
• Elasticsearch: 450 (trial), 900 (production).
• File observer: 5 (trial), 10 (production).
• CouchDB: 5 (trial), 15 (production).
• Impact GUI: 5 (trial), 5 (production).

Persistent storage recommended IOPS
• Cassandra: 600 (trial), 2400 (production).
• Kafka: 300 (trial), 2400 (production).
• Zookeeper: 50 (trial), 150 (production).
• Elasticsearch: 300 (trial), 2400 (production).
• File observer: 50 (trial), 100 (production).
• Impact GUI: 100 (trial), 200 (production).
• CouchDB: 60 (trial), 300 (production).

Storage
Create storage before your installation.
Note: If you want to deploy IBM Netcool Operations Insight on Red Hat OpenShift on a cloud platform, such as Red Hat OpenShift Kubernetes Service (ROKS), assess your storage requirements.
Netcool Operations Insight on container platforms supports three storage options: local storage, Rook Ceph storage, and vSphere storage. Support for Rook Ceph storage was added in V1.6.0.2 for Netcool Operations Insight on OpenShift.

Configure storage Security Context Constraint (SCC)
Before configuring storage, you need to determine and declare your storage SCC for a chart running in a non-root environment across a number of storage solutions. For more information about how to secure your storage environment, see the OpenShift documentation: https://docs.openshift.com/container-platform/4.4/authentication/managing-security-context-constraints.html
The following table shows the information about persistent volumes, along with the expected size and access mode, for a full deployment. For more information about storage access modes, see the Cloud Pak documentation: https://playbook.cloudpaklab.ibm.com/cm/storage-accessmodes

Name                    Replicas (trial)  Replicas (production)  Trial size per replica  Recommended production size per replica  Access mode    User  fsGroup
cassandra               1                 3                      50Gi                    150Gi                                     ReadWriteOnce  1001  2001
cassandra-bak           1                 3                      50Gi                    100Gi                                     ReadWriteOnce  1001  2001
cassandra-topology      1                 3                      50Gi                    150Gi                                     ReadWriteOnce  1001  2001
cassandra-topology-bak  1                 3                      50Gi                    100Gi                                     ReadWriteOnce  1001  2001
kafka                   3                 6                      50Gi                    50Gi                                      ReadWriteOnce  1001  2001
zookeeper               1                 3                      5Gi                     10Gi                                      ReadWriteOnce  1001  2001
couchdb                 1                 3                      5Gi                     5Gi                                       ReadWriteOnce  1001  2001
elasticsearch           1                 3                      75Gi                    75Gi                                      ReadWriteOnce  1001  2001
elasticsearch-topology  1                 3                      75Gi                    75Gi                                      ReadWriteOnce  1000  1000
fileobserver            1                 1                      5Gi                     5Gi                                       ReadWriteOnce  1001  2001

Configuring Rook Ceph storage
More information can be found on the Rook Ceph container storage website: https://rook.io/. The following list is a high-level summary of the tasks to complete:
• Attach a disk of an appropriate size to each compute node on your virtualization console.
• Clone the Rook Storage Orchestrator for Kubernetes Git repository and prepare the relevant .yaml files for the Rook storage operator and cluster deployment.
• Create the Rook Ceph storage cluster.
• Create the RADOS Block Device (RBD) storage class (see the sketch after this list).
Optional tasks are:
• Access the Rook Ceph Dashboard.
• Use the Rook Ceph toolbox for cluster diagnostics and health checks.
• Deploy a test application that uses the RBD storage class.
For the step-by-step Rook Ceph storage configuration procedure, see this page in the IT Operations Management Developer Center: https://../..//docs/configuring-red-hat-ceph-storage/.
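As an illustration of the RBD storage class task, a minimal StorageClass definition might look like the following sketch. The provisioner, clusterID, and pool values are assumptions based on the upstream Rook examples and must match the names used in your own Rook Ceph cluster; the Developer Center procedure linked above is the authoritative reference.

# Hypothetical RBD StorageClass sketch; names are assumptions from the Rook
# examples repository. The CSI secret parameters that a working class also
# needs are omitted here for brevity.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com   # <rook namespace>.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph                    # namespace where the Rook cluster runs
  pool: replicapool                       # block pool created during cluster deployment
  imageFormat: "2"
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete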

Configuring persistent volumes with local storage
You can use local storage for trial, demo, or development systems. In addition, local storage is a supported production configuration for topology management. For trial or development systems, you can download the createStorageAllNodes.sh script from the IT Operations Management Developer Center: http://ibm.biz/local_storage_script

Note: This script creates local storage for Netcool Operations Insight only. It does not create storage for the cloud event management or topology management services.
The script facilitates the creation of persistent volumes (PVs) that use local storage. The PVs are mapped volumes, which map to directories off the root file system on the parent node. The script also generates example SSH scripts that create the directories on the local file system of the node. The SSH scripts create directories on the local hard disk associated with the virtual machine and are suitable only for proof-of-concept or development work. A sketch of such a PV follows this note.
Local storage is not recommended for production environments, because the loss of a worker node can result in the loss of locally stored data. You can overcome some of these limitations with a file system that meets your performance, scale, and redundancy requirements. For production environments that use local storage, follow the Red Hat OpenShift recommended method:
• https://docs.openshift.com/container-platform/4.4/storage/persistent_storage/persistent-storage-local.html
Note: If local storage is used, the noi-cassandra-* and noi-cassandra-bak-* PVs must be on the same local node. Cassandra pods fail to bind to their PVCs if this requirement is not met.
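For reference, a single local PV of the general kind the script produces might look like the following sketch. The PV name, path, capacity, storage class, and node name are illustrative assumptions, not values the script emits; the sizes and access modes should follow the persistent volume table earlier in this section.

# Illustrative local PersistentVolume sketch; all names and values are assumptions.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: noi-cassandra-pv-0
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/local-storage/cassandra        # directory created on the node
  nodeAffinity:                               # pins the PV to one worker node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker1.example.com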

Configuring vSphere storage
If you want to use vSphere storage, consult the Red Hat OpenShift documentation: https://docs.openshift.com/container-platform/4.2/storage/persistent-storage/persistent-storage-vsphere.html

Preparing your cluster
Prepare your cluster for the deployment of cloud native Netcool Operations Insight components on Red Hat OpenShift.

Operating System Requirements
For operating system and other detailed system requirements, search for the latest version of the Netcool Operations Insight product on the Software Product Compatibility Reports website: https://www.ibm.com/software/reports/compatibility/clarity/softwareReqsForProduct.html

Architecture
The hardware architecture on which the cloud native Netcool Operations Insight components are installed must be AMD64. Kubernetes itself can have a mixture of worker nodes with different architectures, like AMD64, s390x (Linux on System z), and ARM8, but the Netcool Operations Insight components run only on the AMD64 nodes.

Pod Deployment and High Availability (HA)
The deployment of pods across worker nodes is managed by Red Hat OpenShift. Pods for a service are deployed on nodes that meet the specification and affinity rules that are defined in the service's YAML file. Kubernetes automatically restarts pods if they fail.

Cluster Preparation
Follow the steps in the table to prepare your cluster.

Table 31. Preparing your cluster

1. Provision the required virtual machines. The hardware architecture on which Netcool Operations Insight is installed must be AMD64. Kubernetes can have a mixture of worker nodes with different architectures, like AMD64, s390x (Linux on System z), and ARM8.

2. Download and install Red Hat OpenShift. For the Red Hat OpenShift documentation, see https://access.redhat.com/documentation/en-us/openshift_container_platform/4.3/. For Red Hat OpenShift videos, see https://www.youtube.com/user/rhopenshift/videos

3. Optional: Install IBM Cloud Platform Common Services. Installing kubectl is optional; the Red Hat OpenShift oc command can be used in place of kubectl. For more information, see:
• https://www.ibm.com/support/knowledgecenter/en/SSHKN6/kc_welcome_cs.html
• https://docs.openshift.com/container-platform/4.4/cli_reference/openshift_cli/usage-oc-kubectl.html
• Kubernetes CLI commands: Kubernetes documentation: Overview

4. Ensure that you have access to an administrator account on the target Red Hat OpenShift cluster. Netcool Operations Insight must be installed by a user with administrative access on the cluster.


5. Create a custom namespace to deploy into:

oc create namespace namespace

where namespace is the name of the custom namespace that you want to create.
Optional: If you want multiple independent installations of Netcool Operations Insight within the cluster, create multiple namespaces and run each installation in a separate namespace. Note: Additional disk space and worker nodes are required to support multiple installations.

6. Ensure that you have access to the Netcool Operations Insight for Red Hat OpenShift installation package. For more information, see "V1.6.1 Product and component version matrix" on page 9.
7. If you load the PPA package, skip this step. Otherwise, obtain the entitlement key that is assigned to your ID.
1. Log in to https://myibm.ibm.com/products-services/containerlibrary with the IBMid and password that are associated with the entitled software.
2. In the Entitlement keys section, select Copy key to copy the entitlement key to the clipboard.
3. Log in to the entitled registry by running the command:

docker login cp.icr.io --username cp --password entitlement_key

where entitlement_key is the entitlement key that you copied in the previous step.

8. If you load the PPA package, skip this step. Otherwise, create a Docker registry secret to enable your deployment to pull the Netcool Operations Insight archive from your local Docker registry into your cluster's namespace.

oc create secret docker-registry noi-registry-secret \
  --docker-username=docker_user \
  --docker-password=docker_pass \
  --docker-server=docker-reg.reg_namespace.svc:5000/namespace

Where:
• noi-registry-secret is the name of the secret that you are creating. The suggested value is noi-registry-secret.
• docker_user is the Docker user, for example kubeadmin.
• docker_pass is the Docker password.
• docker-reg is your Docker registry.
• reg_namespace is the namespace that your Docker registry is in.
• namespace is the name of the namespace that you want to deploy to.
Warning: Creating a secret manually in this way results in a temporary password that expires, which can lead to operational issues. For more information, see "Pods cannot restart if secret expires" on page 663.
Make a note of the name of the secret that you specified for noi-registry-secret, as you need to specify it later during installation.
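For example, on a cluster that uses the built-in OpenShift image registry, the command might look like the following; the namespace netcool and the use of the current login token as the password are illustrative assumptions.

# Example only; the namespace and token-based password are assumptions.
oc create secret docker-registry noi-registry-secret \
  --docker-username=kubeadmin \
  --docker-password=$(oc whoami -t) \
  --docker-server=image-registry.openshift-image-registry.svc:5000/netcool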

9. In some Red Hat OpenShift environments, additional configuration is required to allow external traffic to reach the routes. This is due to the addition of network policies to secure pod communication traffic. For example, if you attempt to access a route and it returns a "503 Application Not Available" error, a network policy might be blocking the traffic. For more information, see https://docs.openshift.com/container-platform/4.4/networking/configuring-networkpolicy.html and read section 3.
Ensure that your Red Hat OpenShift environment is updated to allow network policies to function correctly. To do that, check whether the ingresscontroller is configured with the endpointPublishingStrategy: HostNetwork value by running the command:

oc get ingresscontroller default -n openshift-ingress-operator -o yaml

When endpointPublishingStrategy.type is set to HostNetwork, the network policy does not work against routes unless the default namespace contains the selector label. To allow traffic, add a label to the default namespace by running the command:

oc patch namespace default --type=json \
  -p '[{"op":"add","path":"/metadata/labels","value":{"network.openshift.io/policy-group":"ingress"}}]'
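To confirm that the label was applied, you can list the labels on the default namespace:

oc get namespace default --show-labels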

10. Create a ClusterRole and ClusterRoleBinding for the topology management observers. To learn how to perform this task, see https://www.ibm.com/support/knowledgecenter/SS9LQB_1.1.8/LoadingData/t_asm_loadingviakubernetes.html. A rough sketch follows this step.
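As a rough illustration only, a ClusterRole and ClusterRoleBinding of the general shape required might look like the following. The role name, the granted resources and verbs, and the service account name are assumptions, so use the linked topic for the definitive definitions.

# Hypothetical sketch; names, resources, and verbs are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: noi-observer-clusterrole
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "nodes", "namespaces"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: noi-observer-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: noi-observer-clusterrole
subjects:
  - kind: ServiceAccount
    name: noi-service-account          # service account created later in this table
    namespace: netcool                 # assumed deployment namespace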

11. Run the following command to ensure that the system health job is not showing any errors:

NAMESPACE=netcool SERVICE_ACCOUNT=noi-service-account cat <

12. The operator requires a Security Context Constraint (SCC) to be bound to the target service account before installation. All pods use this SCC. An SCC constrains the actions that a pod can perform, such as host IPC access, which is required by the Db2 pod. You can choose either:
• The predefined privileged SCC.
• A custom SCC.
For more information about storage, see "Storage" on page 105. An example of a custom SCC definition is:

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: ibm-noi-scc
allowHostDirVolumePlugin: false
allowHostIPC: true
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: false
defaultAddCapabilities: []
allowedCapabilities:
  - SETPCAP
  - AUDIT_WRITE
  - CHOWN
  - NET_RAW
  - DAC_OVERRIDE
  - FOWNER
  - FSETID
  - KILL
  - SETUID
  - SETGID
  - NET_BIND_SERVICE
  - SYS_CHROOT
  - SETFCAP
  - IPC_OWNER
  - IPC_LOCK
  - SYS_NICE
  - DAC_OVERRIDE
priority: 0
fsGroup:
  type: RunAsAny
readOnlyRootFilesystem: false
requiredDropCapabilities:
  - MKNOD
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - persistentVolumeClaim
  - projected
  - secret

13. Before your installation, create the service account and add the SCC to it by running the following commands:

oc create serviceaccount noi-service-account -n namespace

oc adm policy add-scc-to-user privileged system:serviceaccount:namespace:noi-service-account

oc patch serviceaccount default -p '{"imagePullSecrets": [{"name": "noi-registry-secret"}]}'

Where namespace is the namespace for the project that you are deploying to.

Configuring authentication
Use this section to configure authentication between the components of your hybrid installation, using secrets to contain passwords and ConfigMaps to contain self-signed certificates.

Overview of required secrets and ConfigMaps
Use this information to learn about the secrets and ConfigMaps that are required for communication between services that run on-premises and services that run in the cloud. The secrets that are required to access on-premises services must be created manually. The secrets that are required to access cloud services can be created manually, or can be generated automatically by the installer.

  Users requiring    Secret/ConfigMap                          Data keys in secret        Gives access to a service     Creation methods
  password                                                                                that runs on cloud or
                                                                                          on-premises
  OMNIbus root user  custom_resource-omni-secret               omni_password              On-premises                   Manual
  WebSphere admin    custom_resource-was-secret                was_password               On-premises                   Manual
  user
  OAuth client ID    custom_resource-was-oauth-cnea-secrets    client_id, client_secret   On-premises                   Manual
  (none)             A ConfigMap, named by the user,           (WebSphere certificate)    On-premises                   Manual
                     containing the WebSphere certificate
  (none)             custom_resource-omni-certificate-secret   password                   On-premises                   Manual
  couchdb            custom_resource-couchdb-secret            password, username=root,   Cloud                         Manual or automatic
                                                               secret=couchdb
  hdm                custom_resource-cassandra-auth-secret     username, password         Cloud                         Manual or automatic
  redis              custom_resource-ibm-redis-authsecret      username, password         Cloud                         Manual or automatic
  kafka              custom_resource-kafka-admin-secret        username, password         Cloud                         Manual or automatic
  admin              custom_resource-kafka-client-secret       username, password         Cloud                         Manual or automatic

Where custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install).

Creation of ConfigMap for access to on-premises WebSphere Application Server
Create a ConfigMap containing the certificate for on-premises WebSphere and import it into your cluster, so that the cloud native Netcool Operations Insight components can access WebSphere.
1. Extract an intermediate chain-of-trust certificate from WebSphere.
2. Create a ConfigMap containing the WebSphere certificate:

kubectl create configmap configmap-name --from-file=full-path-to-certificate-file

Where
• configmap-name is the name of the ConfigMap to be created, for example users-certificate. The operator property dash.trustedCAConfigMapName must match this value. For more information, see "Hybrid operator properties" on page 162.
• full-path-to-certificate-file is the file name and path of the WebSphere certificate that was extracted in step 1.
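If you extract the certificate over the network rather than through the WebSphere console, one possible approach is an openssl one-liner such as the following sketch; the host name, port, and file names are placeholders, and you should confirm that the certificate you capture is the intermediate certificate that the components need to trust.

# Hypothetical extraction sketch; dash.example.com and port 16311 are placeholders.
openssl s_client -connect dash.example.com:16311 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > websphere-cert.pem

# Then create the ConfigMap from the extracted file:
kubectl create configmap users-certificate --from-file=websphere-cert.pem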

Creation of secrets for access to on-premises services Follow these steps to give the cloud native Netcool Operations Insight components on Red Hat OpenShift access to the on-premises Operations Management components. These secrets must be created manually. 1. Create a secret to enable cloud native Netcool Operations Insight components to access your on- premises Operations Management ObjectServer.

kubectl create secret generic custom_resource-omni-secret \
  --from-literal=OMNIBUS_ROOT_PASSWORD=omni_password --namespace namespace

Where • custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install).

• namespace is the name of the namespace into which you want to install the cloud native components.
• omni_password is the root password for the on-premises Netcool/OMNIbus that you want to connect to.
2. Create a secret to enable the cloud native Netcool Operations Insight components to access the on-premises Operations Management IBM Netcool/OMNIbus Web GUI.

kubectl create secret generic custom_resource-was-secret \
  --from-literal=WAS_PASSWORD=was_password --namespace namespace

Where • namespace is the name of the namespace into which you want to install the cloud native components. • was_password is the WebSphere admin password for the on-premises Web GUI that you want to connect to. 3. Create a secret to enable on-premises Operations Management to authenticate with cloud native Netcool Operations Insight components by using OAuth.

kubectl create secret generic custom_resource-was-oauth-cnea-secrets \
  --from-literal=client-id=client_id --from-literal=client-secret=client_secret \
  --namespace namespace

Where • custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install). • namespace is the name of the namespace into which you want to install the cloud native components. • client_id is the name to use as your client-id. This value is used later in “Installing the integration kit” on page 167 when you configure your on-premises Operations Management installation to connect to your cloud native components. • client_secret is the name to use for your client secret. This value is used later in “Installing the integration kit” on page 167 when you configure your on-premises Operations Management installation to connect to your cloud native components. 4. Create a secret to enable communication between the OMNIbus component of your on-premises Operations Management installation and the cloud native Netcool Operations Insight components. This secret is created differently depending on whether you require an SSL or non-SSL connection. Choose from the options below. If you do not require an SSL connection, then follow this step to configure authentication: a. Run the following command to configure a non-SSL connection between the OMNIbus component of your on-premises Operations Management installation and the cloud native Netcool Operations Insight components.

kubectl create secret generic custom_resource-omni-certificate-secret \
  --from-literal=PASSWORD=password --from-literal=ROOTCA="" \
  --from-literal=INTERMEDIATECA="" --namespace namespace

Where • custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install). • password is a password of your choice. • namespace is the name of the namespace into which you want to install the cloud native components.

If you require an SSL connection, because the OMNIbus component of your on-premises Operations Management installation uses SSL, or because you want an SSL connection between the cloud native Netcool Operations Insight components and the on-premises installation's ObjectServer, then follow these steps to configure authentication:
a. Configure OMNIbus on your on-premises Operations Management installation to use SSL, if it is not doing so already. To check, run the command oc get secrets -n namespace and verify whether the secret custom_resource-omni-certificate-secret exists. If the secret does not exist and the OMNIbus components are using SSL, complete the following steps.
b. Extract the certificate from your on-premises Operations Management installation.

$NCHOME/bin/nc_gskcmd -cert -extract -db "key_db" -pw password -label "cert_name" -target "ncomscert.arm"

Where • key_db is the name of the key database file. • password is the password to your key database. • cert_name is the name of your certificate. c. Copy the extracted certificate, ncomscert.arm, over to the infrastructure node of your Red Hat OpenShift cluster, or to the node on your cluster where the kubectl CLI is installed. d. Create a secret for the certificate.

kubectl create secret generic custom_resource-omni-certificate-secret \
  --from-literal=PASSWORD=password --from-file=ROOTCA=certificate \
  --namespace namespace

Where • custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install). • password is a password of your choice. • certificate is the path and filename of the certificate that was copied to your cluster in the previous step, ncomscert.arm. • namespace is the name of the namespace into which you want to install the cloud native components. Note: If the ObjectServer is not named 'AGG_V', which is the default, then you must set the global.hybrid.objectserver.config.ssl.virtualPairName parameter when you configure the installation parameters later. For more information, see “Hybrid operator properties” on page 162.

Creation of secrets for access to cloud native Netcool Operations Insight components
If any of these passwords and secrets do not exist, the installer automatically creates random passwords and then creates the required secrets from them. After installation completes successfully, the passwords can be extracted from the secrets. For more information, see "Retrieving passwords from secrets" on page 178. If you want to create all the required passwords and secrets manually, use the following procedure instead.
1. If passwords do not exist for the users in the preceding table, create them. All passwords must have fewer than 16 characters and contain only alphanumeric characters.
2. Create custom_resource-couchdb-secret with the following command:

kubectl create secret generic custom_resource-couchdb-secret --from-literal=password=password --from-literal=secret=couchdb --from-literal=username=root --namespace namespace

Where

• password is the password for the internal CouchDB.
• custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install the cloud native components.
3. Create custom_resource-cassandra-auth-secret with the following command:

kubectl create secret generic custom_resource-cassandra-auth-secret --from-literal=username=username --from-literal=password=password --namespace namespace

Where
• username is the user name. The default is hdm. Do not use cassandra.
• password is the password for user interface communication between pods.
• custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install the cloud native components.
4. Create custom_resource-ibm-redis-authsecret with the following command:

kubectl create secret generic custom_resource-ibm-redis-authsecret --from-literal=username=username --from-literal=password=password --namespace namespace

Where
• username is the user name. The default is redis.
• password is the password for user interface communication between pods.
• custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install the cloud native components.
5. Create custom_resource-kafka-admin-secret with the following command:

kubectl create secret generic custom_resource-kafka-admin-secret --from-literal=username=username --from-literal=password=password --namespace namespace

Where
• username is the user name. The default is kafka.
• password is the password for user interface communication between pods.
• custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install the cloud native components.
6. Create custom_resource-kafka-client-secret with the following command:

kubectl create secret generic custom_resource-kafka-client-secret --from-literal=username=username --from-literal=password=password --namespace namespace

Where
• username is the user name. The default is admin.

• password is the password for user interface communication between pods.
• custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install the cloud native components.
If you want to change a password after installation, see "Changing passwords and recreating secrets" on page 179.
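Whichever method you use, you can verify that the expected secrets exist before you start the installation. For example, for a deployment that will be named noi in the netcool namespace (both names are placeholders):

# Placeholder names: deployment "noi", namespace "netcool".
oc get secrets -n netcool | grep '^noi-'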

Configuring load balancing for on-premises WebGUI/DASH nodes (HA only)
Configure load balancing for the on-premises WebGUI/DASH nodes in a hybrid high availability deployment, where there is more than one on-premises WebGUI node.

About this task
The on-premises WebGUI/DASH servers must be set up with load balancing by using an HTTP server that balances the UI load. If you do not already have load balancing configured for your on-premises WebGUI/DASH nodes, follow these steps.

Procedure
1. Install Db2. If you do not have a Db2 instance, install it on one of your on-premises servers. For more information, see https://www.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/webtop/wip/task/web_con_downloaddb2.html
2. Install IBM HTTP Server. If you do not have IBM HTTP Server, install it on one of your on-premises servers. For more information, see https://www.ibm.com/support/knowledgecenter/SSEQTJ_8.5.5/com.ibm.websphere.ihs.doc/ihs/welc6miginstallihsdist.html
3. Configure WebSphere certificates.
a. Generate a Certificate Signing Request (CSR) on the second WebSphere node. On WebSphere node 2, use the WebSphere Administration console to navigate to Security > SSL certificate and key management > Key stores and certificates > NodeDefaultKeyStore > Personal certificate requests > New, and fill in the required fields, using the DASH domain name as the common name parameter value.
b. Copy the generated certificate request file from WebSphere node 2 to WebSphere node 1.

c. Generate a signed server certificate. Use the intermediate certificate on WebSphere node 1 to sign the CSR that was created in the previous step. For example:

openssl ca -config /opt/dash/IBM/ca-certs/intermediate/openssl.cnf -extensions server_cert \
  -days 375 -notext -md sha256 -in /opt/IBM/HTTPServer/bin/http-server-lb.csr \
  -out /opt/IBM/HTTPServer/bin/http-server-lb1.crt

d. Import the signed server certificate into WebSphere node 2's keystore. Copy the signed server certificate created in the previous step back to WebSphere node 2 and import it into WebSphere node 2's keystore: use the WebSphere Administration console to navigate to Security > SSL certificate and key management > Key stores and certificates > NodeDefaultKeyStore > Personal certificates > Receive from certificate authority.
e. Add the WebSphere node 1 intermediate CA certificate to WebSphere node 2's keystore. Copy the WebSphere node 1 intermediate certificate to WebSphere node 2, and then use WebSphere node 2's WebSphere Administration console to add this certificate to WebSphere node 2's keystore. Navigate to Security > SSL certificate and key management > Key stores and

certificates > NodeDefaultKeyStore > Signer certificates > Add, and then add the intermediate CA certificate.
f. Add the WebSphere node 1 root CA certificate to WebSphere node 2's truststore. Copy the WebSphere node 1 root certificate to WebSphere node 2, and then use WebSphere node 2's WebSphere Administration console to add this certificate to WebSphere node 2's keystore. Navigate to Security > SSL certificate and key management > Key stores and certificates > NodeDefaultKeyStore > Signer certificates > Add, and then add the root CA certificate.
g. Update WebSphere node 2 to use the new certificate. Use WebSphere node 2's WebSphere Administration console and navigate to SSL certificate and key management > Manage endpoint security configurations > JazzSMNode01. For inbound connections, set the Certificate alias in key store to the certificate that was added to the keystore in the previous step.
4. Configure single sign-on for the DASH servers. Export the LTPA keys from the first DASH server, and then import them onto the other DASH server(s). For more information, see https://www.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/webtop/wip/task/web_con_ssoprocedures.html.
5. Create a database to manage load balancing, and then enable WebSphere to connect to the database. Create a database in Db2, and then from DASH, click Console Settings > WebSphere Administrative console > Launch WebSphere Administrative console, and then Resources > JDBC > JDBC providers, and add an entry for Db2. For more information, see https://www.ibm.com/support/knowledgecenter/SSEKCU_1.1.3.0/com.ibm.psc.doc/tip_original/ttip_config_ha_setup.html.
6. Create a WebSphere datasource to enable connection to the load-balancing Db2 database: In DASH, click Console Settings > WebSphere Administrative console > WebSphere Administrative console, and then Resources > JDBC > Data Sources, and add an entry for the load-balancing Db2 database that you created.
7. Create a key database for IBM HTTP Server to store keys and certificates in.

• Create the key database:

cd /space/ibm/netcool/httpserver/bin
./gskcapicmd -keydb -create -db ~/http-server-keys -pw WebAS -stash

For more information, see https://www.ibm.com/support/knowledgecenter/SS7K4U_8.5.5/com.ibm.websphere.ihs.doc/ihs/tihs_createkeydb390.html
• Add the root CA certificate to the IBM HTTP Server keystore.

./gskcmd -cert -add -db ~/http-server-keys.kdb -pw WebAS -file ~/root-ca.pem -label root-ca

• Add intermediate cert to the IBM HTTP Server keystore.

./gskcmd -cert -add -db ~/http-server-keys.kdb -pw WebAS -file ~/intermediate-ca.pem -label intermediate

• Create CSR

./gskcapicmd -certreq -create -db ~/http-server-keys.kdb -pw WebAS \
  -dn "C=GB,ST=England,O=IBM,OU=HDM,CN=noi-on-prem1.fyre.ibm.com" -size 2048 \
  -file ~/http-server-lb.csr -label http-server-lb

• Sign the CSR with your intermediate cert to create http-server-lb.crt.

openssl ca -config /opt/dash/IBM/ca-certs/intermediate/openssl.cnf -extensions server_cert \
  -days 375 -notext -md sha256 -in /opt/IBM/HTTPServer/bin/http-server-lb.csr \
  -out /opt/IBM/HTTPServer/bin/http-server-lb1.crt

• Add the signed cert to the IBM HTTP Server keystore.

./gskcmd -cert -receive -file ~/http-server-lb.crt -db ~/http-server-keys.kdb -pw WebAS

• Assign the root CA certificate to be the default certificate. For more information, see https://www.ibm.com/support/knowledgecenter/SSEQTJ_8.5.5/com.ibm.websphere.ihs.doc/ihs/tihs_selfsigned.html. Alternatively, you can use the ikeyman utility provided with IBM HTTP Server to assign the root CA certificate as the default.
8. Configure SSL for IBM HTTP Server. Locate the line # End of example SSL configuration in HTTP_server_install_dir/conf/httpd.conf, and then append the following, ensuring that your KeyFile and SSLStashfile values reference the key database file that you created for IBM HTTP Server.

# End of example SSL configuration
LoadModule ibm_ssl_module modules/mod_ibm_ssl.so
Listen 443
<VirtualHost *:443>
SSLEnable
SSLProtocolDisable SSLv2
ErrorLog "/home/test/sslerror.log"
TransferLog "/home/test/sslaccess.log"
KeyFile "/home/test/http-server-keys.kdb"
SSLStashfile "/home/test/http-server-keys.sth"
</VirtualHost>
SSLDisable

9. Tell IBM HTTP Server where the plugin-cfg.xml file will be. Add the following to the end of HTTP_server_install_dir/conf/httpd.conf:

LoadModule was_ap22_module "HTTP_server_install_dir/bin/64bits/mod_was_ap22_http.so"
WebSpherePluginConfig "HTTP_server_install_dir/config/plugin-cfg.xml"

10. Configure the WAS plugin for IBM HTTP Server. Generate plugin-cfg.xml, and copy it to your IBM HTTP Server installation.

JazzSM_Profile/bin/GenPluginCfg.sh
cp /space/ibm/netcool/jazz/profile/config/cells/plugin-cfg.xml HTTP_web_server_install_dir/plugins/config/webserver1/plugin-cfg.xml

Edit plugin-cfg.xml to point to your key store and stash file (http-server-keys.kdb and http-server-keys.sth), and add entries for each of your DASH servers. For more information, see https://www.ibm.com/support/knowledgecenter/en/SSEKCU_1.1.3.0/com.ibm.psc.doc/tip_original/ttip_config_loadbal_plugin_cfg.html
11. Edit HTTP_web_server_install_dir/plugins/config/webserver1/plugin-cfg.xml. Find the relevant section and append the required line, as described in the linked load-balancing topic.

12. Start the HTTP Server:

HTTP_web_server_install_dir/bin/apachectl start

13. Stop and restart the Jazz for Service Management application server

cd JazzSM_WAS_Profile/bin
./stopServer.sh server1 -username smadmin -password password
./startServer.sh server1

where JazzSM_WAS_Profile is the location of the application server profile that is used for Jazz for Service Management, usually /opt/IBM/JazzSM/profile.
14. When load balancing is correctly configured, you can access DASH without providing a port in the URL, for example: https://http_server_hostname/ibm/console.

Setting up persistence for the OAuth service (HA only)
Use this topic to create a database that persists OAuth tokens and clients for use by all the WebGUI nodes.

Procedure
1. Create a Db2 database. Follow the instructions at https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.multiplatform.doc/ae/cwbs_oauthdb2.html. You can use the same Db2 instance that you use for your load-balancing database. When instructed to create a client in the Db2 database, use the following values:

INSERT INTO OAuthDBSchema.OAUTH20CLIENTCONFIG
  ( COMPONENTID, CLIENTID, CLIENTSECRET, DISPLAYNAME, REDIRECTURI, ENABLED )
VALUES
  ( 'NetcoolOAuthProvider', 'client_id', 'client_secret', 'My Client', 'redirect_url', 1 )

Where
• client_id is the value of client-id in custom_resource-was-oauth-cnea-secrets. For more information, see "Configuring authentication" on page 149.
• client_secret is the value of client-secret in custom_resource-was-oauth-cnea-secrets. For more information, see "Configuring authentication" on page 149.
• redirect_url is the value that you specified for Redirect URL when you installed the integration kit. For more information, see "Installing the integration kit" on page 167.
2. Create a JDBC entry to enable connection to your Db2 instance from WebSphere. In DASH, click Console Settings > WebSphere Administrative console > WebSphere Administrative console, and then Resources > JDBC > JDBC providers, and add an entry for Db2.
3. Create a WebSphere datasource that has the credentials to connect to the OAuth Db2 database: In DASH, click Console Settings > WebSphere Administrative console > WebSphere Administrative console, and then Resources > JDBC > Data Sources, and add an entry for the OAuth Db2 database that you created. This datasource must have a different name from the datasource created for the load-balancing feature; jdbc/oauthProvider is the suggested value. The value of JNDI name for the datasource must match the value of the oauthjdbc.JDBCProvider parameter in NetcoolOAuthProvider.xml.

Installing
Follow these instructions to deploy a hybrid Netcool Operations Insight solution.

Installing cloud native components on Red Hat OpenShift
To create a hybrid installation, you must install the cloud native Netcool Operations Insight components on Red Hat OpenShift, and configure them to access your on-premises ObjectServer and on-premises

Web GUI. You can install the cloud native Netcool Operations Insight components by using the command line or the user interface.

Installing cloud native components (UI)
Install the cloud native Netcool Operations Insight components by using the Red Hat OpenShift Operator Lifecycle Manager (OLM) console.

Before you begin
The Netcool Operations Insight operator needs cluster scope permissions. It requires role-based access control (RBAC) authorization at a cluster level because it deploys and modifies Custom Resource Definitions and cluster roles.

Procedure
Create a catalog source
1. In the Red Hat OpenShift console (version 4.3 or higher), select Administration > Cluster Settings.
2. Under the Global Configurations tab, select the OperatorHub configuration resource.
3. Under the Sources tab, click the Create Catalog Source button.
4. Provide the catalog source name and the image URL, docker.io/ibmcom/noi-operator-catalog:1.0.0-20200620093846, and select the Create button.
5. The catalog source appears. Refresh the screen after a few minutes to ensure that the number of operators count becomes 1.
Create the Netcool Operations Insight operator
6. Select the menu navigation OperatorHub and search for the Netcool Operations Insight operator. Select the Install button.
7. Select a namespace to install the operator into. Do not use namespaces that are owned by Kubernetes or OpenShift, such as kube-system or default. If you don't already have a project, create one under the navigation menu Home > Projects.
Note: If you use an OpenShift V4.3 environment and you create operators by selecting Approval Strategy > Manual, the operator is stuck in the Requires approval state. If you click Require Approval, you get a 404 error.
8. Select the Install button.
9. Under the navigation menu Operators > Installed Operators, view the Netcool Operations Insight operator. It takes a few minutes to install. Ensure that the status on the installed Netcool Operations Insight operator page is Succeeded.
10. Determine whether you want to install Netcool Operations Insight on the cloud or use a hybrid architecture. Select the Create Instance link under the correct custom resource.
11. Use the YAML or the form view to configure the properties for the Netcool Operations Insight deployment. If you are using Red Hat OpenShift V4.4.5 or earlier, you cannot use the form view and must use the CLI to directly edit the .yaml file. For more information about configurable properties for a hybrid deployment, see "Hybrid operator properties" on page 162.
Note: Changing the instances in the form view is not supported.
12. Select the Create button.
13. Under the All Instances tab, a Netcool Operations Insight formation and a Netcool Operations Insight Cloud or Hybrid instance appear. View the status of each of the updates on the installation. When the instance state shows OK, the deployment is ready.
Note: Changing an existing deployment from a Trial deployment type to a Production deployment type is not supported.
Note: The topology management capability is enabled by default. If you want to disable it, go to Operators > Installed Operators and click the Netcool Operations Insight operator. Then click

Create Instance on the Cloud deployment or Hybrid deployment tab, and click Edit form in the upper right corner. Finally, go to the Topology section and click the toggle switch under Topology Enabled.
Note: If you update custom secrets in the OLM console, the crypto key is corrupted and the command to encrypt passwords does not work. Update custom secrets only with the CLI. For more information, see "Configuring authentication" on page 149 and "Installing operators with the CLI using PPA" on page 120. For more information about storing a certificate as a secret, see https://www.ibm.com/support/knowledgecenter/SS9LQB_1.1.8/LoadingData/t_asm_obs_configuringsecurity.html

What to do next
If you want to enable or disable a feature after your installation, you can edit the Netcool Operations Insight instance by running the command:

oc edit noi instance_name

Where instance_name is the name of the instance that you want to edit. You can then enable or disable the observers.
Note: When you disable a feature after installation, its resource is not automatically deleted. To find out whether the resource was deleted, check the operator log.
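One way to follow the operator log is with oc logs against the operator's deployment. The deployment name noi-operator and the namespace netcool below are assumptions, so confirm the actual names first:

# Deployment and namespace names are assumptions; list deployments to confirm.
oc get deployments -n netcool
oc logs deployment/noi-operator -n netcool --follow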

Installing cloud native components (CLI)
Follow these instructions to install only the cloud native Netcool Operations Insight components, as part of a hybrid deployment.

Before you begin
• To push the images to the image registry, the install-noi-archive.sh script requires one of the following:
– Podman (recommended): https://podman.io/getting-started/
– Docker: https://www.docker.com/get-started
– Skopeo: https://github.com/containers/skopeo
• To validate the image archive, the install-noi-archive.sh script requires shasum (RHEL 7/8: yum install -y perl-Digest-SHA).
• To unpack images and push them to the image registry, install-noi-archive.sh requires 150 Gi of free storage.
• The operator has cluster scope permissions. It requires role-based access control (RBAC) authorization at a cluster level because it deploys and modifies Custom Resource Definitions (CRDs) and cluster roles.

Procedure
1. Complete the following steps to verify that your IBM Passport Advantage software download is valid and has been signed by IBM:
a) Untar or unzip the download so that you have five files:
• *.tar
• *-public-key
• *-public-key-ocsp
• *-public-key-ocsp-intermediate
• *.sig
b) Ensure that you have OpenSSL installed, and then issue the following command with the signature and public key files:

openssl dgst -sha256 -verify public_key -signature signature_file archive_file

Example command:

openssl dgst -sha256 -verify noi-public-key -signature test.tar.sig test.tar

c) If the file has been signed by IBM, the openssl command returns the verified OK message on the command line. 2. Run the following command to verify that the certificate used to sign your Passport Advantage download is valid and verify its ownership by IBM:

openssl x509 -inform pem -in certificate_file -noout -subject -issuer -startdate -enddate

Example command:

openssl x509 -inform pem -in noi-public-key-ocsp -noout -subject -issuer -startdate -enddate

This command displays the certificate issuer, the owner, as well as the certificate validity dates. 3. Run the following command to verify with the Digicert Certificate Authority whether the certificate is still valid:

openssl ocsp -no_nonce -issuer intermediate_certificate -cert certificate \
  -VAfile intermediate_certificate -text -url http://ocsp.digicert.com -respout ocsptest

Example command:

openssl ocsp -no_nonce -issuer noi-public-key-ocsp-intermediate -cert noi-public_key_ocsp \
  -VAfile noi-public_key_ocsp_intermediate -text -url http://ocsp.digicert.com

This command connects to the Digicert Certificate Authority and verifies whether the certificate used to create the keys is still valid and in good standing. 4. Extract the downloaded file into a local directory.

tar -xvf ibm-netcool-prod-1.6.1.tar

5. Run the installation script from within the archive directory:

./install-noi-archive.sh --help
This script installs images required by Netcool Operations Insight v1.6.1
Usage: ./install-noi-archive.sh
  [-n, --namespace]        : namespace
  [-u, --user]             : user
  [-p, --password]         : password
  [-d, --use-docker]       : use docker command
  [-o, --use-podman]       : use podman command
  [-s, --use-skopeo]       : use skopeo command
  [-r, --registry]         : image registry
  [-a, --validate-archive] : validate archive
  [-h, --help]

6. Validate the install archive by running the command:

./install-noi-archive.sh --validate-archive

7. Install the images to an image registry by running the command:

./install-noi-archive.sh --user user --password password --namespace namespace --use-podman
Install Netcool Operations Insight v1.6.1

registry...: image-registry.openshift-image-registry.svc
namespace..:
user.......:
command....:

Enter y to continue:

The installation takes between 30 and 60 minutes to complete.
Note: The default username is kubeadmin, the default password is the value in the /root/auth/kubeadmin-password file, and the default namespace is default.

160 IBM Netcool Operations Insight: Integration Guide Note: Podman is recommended for Red Hat OpenShift systems. 8. Unpack the Netcool Operations Insight operator archive by running the commands:

tar -xvf noi-operator-1.0.0.tgz

cd noi-operator-1.0.0

./install.sh --help
This script installs the Netcool Operations Insight operator
Usage: ./install.sh
  [-n, --namespace] : namespace
  [-u, --user]      : user
  [-p, --password]  : password
  [-r, --registry]  : image registry
  [-k, --kubectl]   : use kubectl
  [-h, --help]

9. Install the Netcool Operations Insight operator Kubernetes artifacts by using the command:

./install.sh --user user --password password --namespace namespace

Install Netcool Operations Insight operator v1.6.1

registry...: image-registry.openshift-image-registry.svc:5000
namespace..:
user.......:
command....:

Enter y to continue:

Note: The default namespace is default, the default user is the value obtained by running oc whoami, and the default password is the value obtained from running oc whoami -t. 10. Configure your deployment for either a full cloud or hybrid installation. a) For a hybrid installation, edit the properties in deploy/crds/ noi.ibm.com_noihybrids_cr.yaml with configuration values suitable for your deployment. For more information, see “Hybrid operator properties” on page 162. Then, run the following command:

oc apply -f deploy/crds/noi.ibm.com_noihybrids_cr.yaml

Note: Changing an existing deployment from a trial deployment type to a production deployment type is not supported.
Note: The topology management capability is enabled by default. If you want to disable it, open your custom resource file and set topology.enabled: false. The file is deploy/crds/noi.ibm.com_nois_cr.yaml for a Red Hat OpenShift deployment and deploy/crds/noi.ibm.com_noihybrids_cr.yaml for a hybrid deployment.
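For orientation, an edited hybrid custom resource might begin like the following fragment. The kind, API version, and property values shown here are illustrative assumptions; the shipped deploy/crds/noi.ibm.com_noihybrids_cr.yaml file and the "Hybrid operator properties" table are the authoritative sources for the exact schema.

# Illustrative fragment only; field names and values are assumptions based on
# the properties in the "Hybrid operator properties" table.
apiVersion: noi.ibm.com/v1beta1
kind: NOIHybrid
metadata:
  name: noi                      # custom_resource name referenced by the secrets
  namespace: netcool             # target namespace
spec:
  license:
    accept: true                 # license.accept
  deploymentType: trial          # deploymentType: trial or production
  objectServer:
    primaryHost: omnihost1.example.com
    primaryPort: 4100
    backupHost: omnihost2.example.com
    backupPort: 4100
  topology:
    enabled: true                # topology.enabled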

What to do next
If you want to enable or disable a feature after your installation, you can edit the Netcool Operations Insight instance by running the command:

oc edit noihybrid instance_name

Where instance_name is the name of the instance that you want to edit. You can then enable or disable the observers.
Note: When you disable a feature after installation, its resource is not automatically deleted. To find out whether the resource was deleted, check the operator log.

Hybrid operator properties
Find out about the operator properties that can be configured for your hybrid installation.

Table 32. Hybrid deployment properties

  Property                                       Description
  clusterDomain                                  The fully qualified domain name (FQDN) of the cluster.
  deploymentType                                 Deployment type (trial or production).
  entitlementSecret                              Entitlement secret to pull images.
  license.accept                                 Agreement to the license.
  advanced.antiAffinity                          Select this option to prevent primary and backup server pods from being installed on the same worker node.
  advanced.imagePullPolicy                       The default pull policy is IfNotPresent, which causes the kubelet to skip pulling an image that already exists.
  advanced.imagePullRepository                   Docker registry from which all component images are pulled.
  dash.trustedCAConfigMapName                    ConfigMap containing CA certificates to be trusted.
  dash.url                                       URL of the DASH server, that is, 'protocol://fully.qualified.domain.name:port'.
  dash.username                                  Username for connecting to on-premises DASH.

  LDAP properties - http://ibm.biz/install_noi_icp
  ldap.baseDN                                    Configure the LDAP base entry by specifying the base distinguished name (DN).
  ldap.bindDN                                    Configure the LDAP bind user identity by specifying the bind distinguished name (bind DN).
  ldap.mode                                      Choose (standalone) for a built-in LDAP server, or (proxy) to connect to an external organization LDAP server. See http://ibm.biz/install_noi_icp.
  ldap.port                                      Configure the port of your organization's LDAP server.
  ldap.sslPort                                   Configure the SSL port of your organization's LDAP server.
  ldap.storageClass                              LDAP storage class.
  ldap.storageSize                               LDAP storage size.
  ldap.suffix                                    Configure the top entry in the LDAP directory information tree (DIT).
  ldap.url                                       Configure the URL of your organization's LDAP server.

  objectServer.backupHost                        Hostname of the backup ObjectServer.
  objectServer.backupPort                        Port number of the backup ObjectServer.
  objectServer.deployPhase                       Determines when the OMNIbus CNEA schema is deployed.
  objectServer.primaryHost                       Hostname of the primary ObjectServer.
  objectServer.primaryPort                       Port number of the primary ObjectServer.
  objectServer.sslRootCAName                     Specifies the CN name for the CA certificate.
  objectServer.sslVirtualPairName                Only needed when setting up an SSL connection to the ObjectServer pair.
  objectServer.username                          Username for connecting to the on-premises ObjectServer.

  persistence.enabled                            Enable persistent storage.
  persistence.storageClassCassandraBackup        Storage class Cassandra backup.
  persistence.storageClassCassandraData          Storage class Cassandra data.
  persistence.storageClassCouchdb                CouchDB storage class.
  persistence.storageClassElastic                Storage class Elasticsearch.
  persistence.storageClassKafka                  Storage class Kafka.
  persistence.storageClassZookeeper              Storage class Zookeeper.
  persistence.storageSizeCassandraBackup         Storage size Cassandra backup.
  persistence.storageSizeCassandraData           Storage size Cassandra data.
  persistence.storageSizeCouchdb                 CouchDB storage size.
  persistence.storageSizeElastic                 Elasticsearch storage size.
  persistence.storageSizeKafka                   Storage size Kafka.
  persistence.storageSizeZookeeper               Storage size Zookeeper.

  topology.appDisco.enabled                      Enable Application Discovery services and its observer.
  topology.appDisco.dburl                        Db2 host URL for Application Discovery.
  topology.appDisco.dbsecret                     Db2 secret for Application Discovery.
  topology.appDisco.secure                       Enable secure connection to the Db2 host URL for Application Discovery.
  topology.appDisco.certSecret                   This secret must contain the Db2 certificate by the name tls.crt. Applicable only if db.secure is true.
  topology.appDisco.dsStorageClass               Storage class for persistent volume claims used by all discovery servers. Applicable only if persistence.enabled is true.
  topology.appDisco.dsStorageSize                Size of persistent volume claims used by all discovery servers. Applicable only if persistence.enabled is true.
  topology.appDisco.dsDefaultExistingClaim       Name of an existing persistent volume claim (expected in the same namespace) to be used by the discovery server. Applicable only if persistence.enabled is true.
  topology.appDisco.pssStorageClass              Storage class for persistent volume claims used by the primary storage server. Applicable only if persistence.enabled is true.
  topology.appDisco.pssStorageSize               Size of persistent volume claims used by the primary storage server. Applicable only if persistence.enabled is true.
  topology.appDisco.pssExistingClaim             Name of an existing persistent volume claim (expected in the same namespace) to be used by the primary storage server. Applicable only if persistence.enabled is true.
  topology.appDisco.sssStorageClass              Storage class for persistent volume claims used by all secondary storage servers. Applicable only if persistence.enabled is true.
  topology.appDisco.sssStorageSize               Size of persistent volume claims used by all secondary storage servers.
  topology.enabled                               Enable topology.
  topology.netDisco                              Enable Network Discovery services and its observer.
  topology.observers.aws                         Enable AWS observer.
  topology.observers.azure                       Enable Azure observer.
  topology.observers.bigfixinventory             Enable BigFix Inventory observer.
  topology.observers.cienablueplanet             Enable Ciena Blue Planet observer.
  topology.observers.ciscoaci                    Enable Cisco ACI observer.
  topology.observers.contrail                    Enable Contrail observer.
  topology.observers.dns                         Enable DNS observer.
  topology.observers.docker                      Enable Docker observer.
  topology.observers.dynatrace                   Enable Dynatrace observer.
  topology.observers.file                        Enable File observer.
  topology.observers.googlecloud                 Enable Google Cloud observer.
  topology.observers.ibmcloud                    Enable IBM Cloud observer.
  topology.observers.itnm                        Enable ITNM observer.
  topology.observers.jenkins                     Enable Jenkins observer.
  topology.observers.junipercso                  Enable Juniper CSO observer.
  topology.observers.kubernetes                  Enable Kubernetes observer.
  topology.observers.newrelic                    Enable New Relic observer.
  topology.observers.openstack                   Enable OpenStack observer.
  topology.observers.rancher                     Enable Rancher observer.
  topology.observers.rest                        Enable REST observer.
  topology.observers.servicenow                  Enable ServiceNow observer.
  topology.observers.taddm                       Enable TADDM observer.
  topology.observers.vmvcenter                   Enable VMware vCenter observer.
  topology.observers.vmwarensx                   Enable VMware NSX observer.
  topology.observers.zabbix                      Enable Zabbix observer.
  topology.storageClassCassandraBackupTopology   Storage class Cassandra backup. Production only.
  topology.storageClassCassandraDataTopology     Storage class Cassandra data. Production only.
  topology.storageClassElasticTopology           Storage class Elasticsearch. Production only.
  topology.storageClassFileObserver              File observer storage class. Production only.
  topology.storageSizeCassandraBackupTopology    Storage size Cassandra backup. Production only.
  topology.storageSizeCassandraDataTopology      Storage size Cassandra data. Production only.
  topology.storageSizeElasticTopology            Storage size Elasticsearch. Production only.
  topology.storageSizeFileObserver               File observer storage size. Production only.

  webgui.url                                     URL of the WebGUI server, that is, 'protocol://fully.qualified.domain.name:port/path/to/console'.

Configuring on-premises Operations Management
To create a hybrid installation, you must install the Netcool Hybrid Deployment Option Integration Kit on your on-premises Operations Management installation, enable scope-based grouping, configure WebSphere to use SSL_TLSv2, and configure your on-premises ObjectServer gateway mappings.

Enabling scope-based event grouping
Scope-based grouping must be enabled on your on-premises ObjectServer in order for scope-based correlations to work on your hybrid deployment. Learn how to check whether scope-based grouping is enabled, and how to configure it if it is not.

Procedure
1. Check whether you already have scoping enabled on your on-premises ObjectServer. Run the following command.

$OMNIHOME/bin/nco_sql -user username -password password -server server_name

Where • username is the administrative user for the ObjectServer, usually root. • password is the password for the administrative user. • server_name is the name of your ObjectServer.

Verify that:
• A row is returned by the following query: select TriggerName from catalog.triggers where TriggerName='correlation_new_row';
• The ScopeID field is present in the alerts.status table.
If you can verify both of these conditions, scope-based grouping is already enabled on your ObjectServer and you can skip the rest of this topic.
2. To enable scope-based grouping, use the following steps:
a) Follow the instructions in this link: https://www.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/install/task/omn_con_ext_installingscopebasedegrp.html
b) Download the file legacy_scoping_procedures.sql from http://../..//docs/add-scoping-triggers/ .
c) Run the downloaded SQL file on your primary ObjectServer to update the triggers.

$OMNIHOME/bin/nco_sql -server server_name -user username -password password < legacy_scoping_procedures.sql

Where • username is the administrative user for the ObjectServer, usually root. • password is the password for the administrative user. • server_name is the name of your primary ObjectServer. d) Run the downloaded SQL file on your backup ObjectServer to update the triggers.

$OMNIHOME/bin/nco_sql -server server_name -user username -password password < legacy_scoping_procedures.sql

Where • username is the administrative user for the ObjectServer, usually root. • password is the password for the administrative user. • server_name is the name of your backup ObjectServer.
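As a worked example of the check in step 1, an interactive session might look like the following; the ObjectServer name AGG_P is a placeholder.

$OMNIHOME/bin/nco_sql -user root -password password -server AGG_P
1> select TriggerName from catalog.triggers where TriggerName='correlation_new_row';
2> go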

Set WebSphere protocol to SSL_TLSv2
WebSphere Application Server, as part of the on-premises Operations Management installation, needs to be able to import certificates from the cloud native Netcool Operations Insight components deployment on Red Hat OpenShift. To do this, WebSphere must be configured to use SSL_TLSv2. Use this procedure to verify or change this setting. If you have more than one WebGUI/DASH node, run this procedure on each WebGUI/DASH node.

Procedure
1. In DASH, click Console Settings > WebSphere Administrative console > WebSphere Administrative console.
2. Click Security > SSL certificate and key management, and under Related Items select SSL configurations.
3. Select NodeDefaultSSLSettings from the list. Under Additional Properties, click Quality of protection (QoP) settings.
4. From the protocol menu, select SSL_TLSv2 if it is not already selected, and save this configuration.
5. Restart Dashboard Application Services Hub on your on-premises Operations Management installation.

cd JazzSM_WAS_Profile/bin
./stopServer.sh server1 -username smadmin -password password
./startServer.sh server1

Installing the integration kit
To create a hybrid installation, you must use IBM Installation Manager to install the Netcool Hybrid Deployment Option Integration Kit on your on-premises Operations Management installation.

Prerequisites
• The primary and backup ObjectServers are both running, and are listening on external IP addresses.
• Installation Manager V1.9 or later is installed and can be run in GUI mode. If you are running an older version of Installation Manager, the following error is displayed in the Installation Manager logs:

javax.net.ssl.SSLException: Connection has been shutdown: javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure

• On-premises Operations Management must be at the same version as Netcool Operations Insight on Red Hat OpenShift. For more information, see “V1.6.1 Product and component version matrix” on page 9.
• The cloud native components of Netcool Operations Insight on Red Hat OpenShift are successfully deployed.
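To confirm the Installation Manager level before you start, the imcl command line tool can be used, as in the following sketch; the eclipse/tools location is the standard Installation Manager layout, and the -version option is assumed to behave the same on your release.

IM_dir/eclipse/tools/imcl -version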

Installing
Use the following steps to configure OAuth authentication between an on-premises Operations Management installation and a deployment of cloud native Netcool Operations Insight components on Red Hat OpenShift. If you have more than one WebGUI/DASH node, then this procedure must be run on each WebGUI/DASH node.
1. Use Installation Manager to install the Netcool Hybrid Deployment Option Integration Kit.
a. Start Installation Manager in GUI mode with the following commands:

cd IM_dir/eclipse
./IBMIM

where IM_dir is the Installation Manager Group installation directory, for example /home/netcool/IBM/InstallationManager/eclipse.
b. In Installation Manager, navigate to Preferences->Repositories->Add Repository, and add the location of the cloud repository that was automatically created when you installed the cloud native Netcool Operations Insight components. Use the following command on your Red Hat OpenShift infrastructure node to find the location of the repository, where namespace is the namespace in which your cloud native Netcool Operations Insight components are deployed:

oc get routes -n namespace | grep repository
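If you want just the host portion for building the repository URL, the same command can be extended as in the following sketch; it assumes the default oc get routes column layout, in which the host name is the second column.

oc get routes -n namespace | grep repository | awk '{print $2}'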

This repository contains the integration kit package that is required by your on-premises Operations Management installation. An example repository location is https://netcool.custom_resource.apps.fqdn/im/repository/repository.config. Select Apply and OK.
c. From the main Installation Manager screen, select Install, and from the Install Packages window select Netcool Hybrid Deployment Option Integration Kit.
d. Proceed through the windows, accept the license and the defaults, and enter the on-premises WebSphere Application Server password.
e. On the OAuth 2.0 Configuration window, set Redirect URL to the URL of your cloud native Netcool Operations Insight components deployment. This URL is https://netcool.custom_resource.apps.fqdn/users//authprovider/v1/was/return
Where

• custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml/noi.ibm.com_nois_cr.yaml (CLI install).
• fqdn is the cluster FQDN.
f. On the OAuth 2.0 Configuration window, set Client ID and Client Secret to the values that were set for them in the secret custom_resource-was-oauth-cnea-secrets when you installed the cloud native Netcool Operations Insight components. Retrieve these values by running the following commands on your cloud native Netcool Operations Insight components deployment.

kubectl get secret custom_resource-was-oauth-cnea-secrets -o json -n namespace | grep client-secret | cut -d : -f2 | cut -d '"' -f2 | base64 -d;echo
kubectl get secret custom_resource-was-oauth-cnea-secrets -o json -n namespace | grep client-id | cut -d : -f2 | cut -d '"' -f2 | base64 -d;echo
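An equivalent retrieval that avoids the grep and cut parsing is a jsonpath query, as in the following sketch; the data key names client-id and client-secret are taken from the commands above.

kubectl get secret custom_resource-was-oauth-cnea-secrets -n namespace -o jsonpath='{.data.client-id}' | base64 -d; echo
kubectl get secret custom_resource-was-oauth-cnea-secrets -n namespace -o jsonpath='{.data.client-secret}' | base64 -d; echo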

Where
• custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml/noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace in which the cloud native Netcool Operations Insight components are installed.
g. Select Next and Install.
2. Restart Dashboard Application Services Hub on your on-premises Operations Management installation by using the following commands.

cd JazzSM_WAS_Profile/bin
./stopServer.sh server1 -username smadmin -password password
./startServer.sh server1

where JazzSM_WAS_Profile is the location of the application server profile that is used for Jazz for Service Management. This is usually /opt/IBM/JazzSM/profile.
Note: After the Netcool Hybrid Deployment Option Integration Kit has been installed, you will no longer be able to create Impact connections by using the on-premises DASH UI. If you need to configure a new connection, you must do so by using the following procedure:
1. Use the command line to configure the required connection.

JazzSM_path/ui/bin/restcli.sh putProvider -username smadmin -password password -provider "Impact_NCICLUSTER.host" -file input.txt

where
• JazzSM_path is the name of the Jazz for Service Management installation, usually /opt/IBM/JazzSM.
• password is the password for the smadmin administrative user.
• host is the Impact server, for example test1.fyre.ibm.com.
• input.txt has content similar to the following (where host is the Impact server, for example test1.fyre.ibm.com):

{ "authUser": "impactuser", "authPassword": "netcool", "baseUrl": "https:\/\/test1.fyre.ibm.com:17311\/ibm\/tivoli\/rest", "datasetsUri": "\/providers\/Impact_NCICLUSTER.test1.fyre.ibm.com\/datasets", "datasourcesUri": "\/providers\/Impact_NCICLUSTER.test1.fyre.ibm.com\/datasources", "description": "Impact_NCICLUSTER", "externalProviderId": "Impact_NCICLUSTER", "id": "Impact_NCICLUSTER.test1.fyre.ibm.com", "label": "Impact_NCICLUSTER", "remote": true, "sso": false, "type": "Impact_NCICLUSTER",

168 IBM Netcool Operations Insight: Integration Guide "uri": "\/providers\/Impact_NCICLUSTER.test1.fyre.ibm.com", "useFIPS": true }

2. Restart Dashboard Application Services Hub on your Operations Management on-premises installation by using the following commands.

cd JazzSM_WAS_Profile/bin
./stopServer.sh server1 -username smadmin -password password
./startServer.sh server1

where JazzSM_WAS_Profile is the location of the application server profile that is used for Jazz for Service Management. This is usually /opt/IBM/JazzSM/profile.

Uninstalling
If you want to uninstall the Netcool Hybrid Deployment Option Integration Kit, use Installation Manager.
1. Start Installation Manager in GUI mode with the following commands:

cd IM_dir/eclipse
./IBMIM

where IM_dir is the Installation Manager Group installation directory, for example /home/netcool/IBM/InstallationManager/eclipse.
2. From the main Installation Manager screen, select Uninstall, and from Installed Packages select Netcool Hybrid Deployment Option Integration Kit and Uninstall.
3. After the Netcool Hybrid Deployment Option Integration Kit has been uninstalled, the columns, views, and groups that are related to the cloud native Netcool Operations Insight components are still displayed. If you want to remove these, you must uninstall the cloud native Netcool Operations Insight components deployment, or change its integration point. This is because the removal of the Netcool Hybrid Deployment Option Integration Kit does not remove the WebGUI console integration, which is created by the deployment of the cloud native Netcool Operations Insight components.

Upgrading
The Update option in Installation Manager is not currently supported for the Netcool Hybrid Deployment Option Integration Kit. To update the Netcool Hybrid Deployment Option Integration Kit to a newer version, use the Installing and Uninstalling sections above to complete the following steps:
1. Use Installation Manager to uninstall the Netcool Hybrid Deployment Option Integration Kit.
2. Use Installation Manager to install the new version of the Netcool Hybrid Deployment Option Integration Kit, ensuring that you follow the last step to restart Dashboard Application Services Hub.

Update gateway settings
You must update your on-premises ObjectServer gateway settings with cloud native Netcool Operations Insight components mappings to enable bi-directional data replication.

About this task
Use the following steps to configure replication of cloud native Netcool Operations Insight components fields in the on-premises bi-directional aggregation ObjectServer Gateway, AGG_GATE. This step must be performed as the last step of the hybrid installation, to make sure that the ObjectServer schemas match in the cloud native Netcool Operations Insight components deployment and in the on-premises Operations Management installation.

Procedure
1. On the server where you installed the on-premises aggregation gateway, edit the on-premises gateway map definition file, $NCHOME/omnibus/etc/AGG_GATE.map. Append the following to the CREATE MAPPING StatusMap entry:

###############################################################################
#
# CEA Cloud Event Analytics
#
###############################################################################
'AsmStatusId' = '@AsmStatusId',
'CEAAsmStatusDetails' = '@CEAAsmStatusDetails',
'CEACorrelationKey' = '@CEACorrelationKey',
'CEACorrelationDetails' = '@CEACorrelationDetails',
'CEAIsSeasonal' = '@CEAIsSeasonal',
'CEASeasonalDetails' = '@CEASeasonalDetails',

2. On the server where you installed the on-premises aggregation gateway, edit the on-premises gateway map definition file, $NCHOME/omnibus/etc/AGG_GATE.map. Append the following new mapping entries to the file:

###############################################################################
#
# CEA Cloud Event Analytics Mapping
#
###############################################################################
CREATE MAPPING CEAProperties
(
  'Name' = '@Name' ON INSERT ONLY,
  'CharValue' = '@CharValue',
  'IntValue' = '@IntValue'
);

CREATE MAPPING CEASiteName
(
  'SiteName' = '@SiteName' ON INSERT ONLY,
  'CEACorrelationKey' = '@CEACorrelationKey' ON INSERT ONLY,
  'Identifier' = '@Identifier',
  'CustomText' = '@CustomText',
  'CustomTimestamp' = '@CustomTimestamp',
  'CustomWeight' = '@CustomWeight',
  'HighImpactWeight' = '@HighImpactWeight',
  'HighImpactText' = '@HighImpactText',
  'HighCauseWeight' = '@HighCauseWeight',
  'HighCauseText' = '@HighCauseText'
);

CREATE MAPPING CEACKey
(
  'CEACorrelationKey' = '@CEACorrelationKey' ON INSERT ONLY,
  'LastOccurrence' = '@LastOccurrence',
  'Identifier' = '@Identifier',
  'ExpireTime' = '@ExpireTime',
  'CustomText' = '@CustomText',
  'CustomTimestamp' = '@CustomTimestamp',
  'CustomWeight' = '@CustomWeight',
  'HighImpactWeight' = '@HighImpactWeight',
  'HighImpactText' = '@HighImpactText',
  'HighCauseWeight' = '@HighCauseWeight',
  'HighCauseText' = '@HighCauseText'
);

CREATE MAPPING CEACKeyAliasMembers
(
  'CEACorrelationKey' = '@CEACorrelationKey' ON INSERT ONLY,
  'CorrelationKeyAlias' = '@CorrelationKeyAlias'
);

CREATE MAPPING CEAPriorityChildren
(
  'Identifier' = '@Identifier' ON INSERT ONLY,
  'CustomText' = '@CustomText',
  'CustomTimestamp' = '@CustomTimestamp',
  'CustomWeight' = '@CustomWeight',
  'HighImpactWeight' = '@HighImpactWeight',
  'HighImpactText' = '@HighImpactText',
  'HighCauseWeight' = '@HighCauseWeight',
  'HighCauseText' = '@HighCauseText'
);

3. On the server where you installed the on-premises aggregation gateway, edit the gateway table replication definition file, $NCHOME/omnibus/etc/AGG_GATE.tblrep.def. Append the following replication statements to the file:

###############################################################################
#
# CEA Cloud Event Analytics Replication Definition
#
###############################################################################
REPLICATE ALL FROM TABLE 'master.cea_properties' USING map 'CEAProperties';

REPLICATE ALL FROM TABLE 'master.cea_sitename' USING map 'CEASiteName';

REPLICATE ALL FROM TABLE 'master.cea_ckey' USING map 'CEACKey';

REPLICATE ALL FROM TABLE 'master.cea_ckey_alias_members' USING map 'CEACKeyAliasMembers';

REPLICATE ALL FROM TABLE 'master.cea_priority_children' USING map 'CEAPriorityChildren';
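Before you restart the gateway, a quick consistency check can confirm that every map name referenced in the replication definition file has a matching CREATE MAPPING entry in the map file. The following sketch uses plain grep and assumes the file paths shown above.

grep -o "USING map '[A-Za-z]*'" $NCHOME/omnibus/etc/AGG_GATE.tblrep.def
grep "CREATE MAPPING" $NCHOME/omnibus/etc/AGG_GATE.map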

4. Restart the aggregation gateway for the changes to take effect.
a) Stop the gateway by killing its process:

kill -9 $(ps -ef | grep nco_g_objserv_bi | grep .props | awk -F ' ' {'print $2'})

b) Start the Gateway with the following command:

$NCHOME/omnibus/bin/nco_g_objserv_bi -propsfile $NCHOME/omnibus/etc/AGG_GATE.props &

For more information, see https://www.ibm.com/support/knowledgecenter/en/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/probegtwy/concept/omn_gtw_runninggtwys.html .
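If you restart the gateway often, steps 4a and 4b can be combined in a small helper script. This is a sketch, not a supplied utility: it assumes pgrep is available on the server and that the properties file path matches the one used above.

#!/bin/sh
# Stop the running bi-directional aggregation gateway, if any, then start it again.
PID=$(pgrep -f "nco_g_objserv_bi.*AGG_GATE.props")
if [ -n "$PID" ]; then
  kill -9 "$PID"
fi
$NCHOME/omnibus/bin/nco_g_objserv_bi -propsfile $NCHOME/omnibus/etc/AGG_GATE.props &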

Installing the connection layer operator with the Operator Lifecycle Management console
Learn how to install the connection layer operator with the Operator Lifecycle Management (OLM) console. Each connection layer operator establishes a connection to an additional ObjectServer in your hybrid environment.

Before you begin
You must first deploy IBM Netcool Operations Insight on Red Hat OpenShift in your cloud or hybrid environment. For more information, see “Installing cloud native components (UI)” on page 158 and “Installing operators with the Operator Lifecycle Management console” on page 119.
This installation connects an ObjectServer aggregation pair to a Netcool Operations Insight on OpenShift instance. Before you deploy the connection layer, create two secrets:
1. Create a secret to enable cloud native Netcool Operations Insight components to access your on-premises Operations Management ObjectServer.

kubectl create secret generic custom_resource-omni-secret --from-literal=OMNIBUS_ROOT_PASSWORD=omni_password --namespace namespace

Where
• custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment, in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install the cloud native components.
• omni_password is the root password for the on-premises Netcool/OMNIbus that you want to connect to.
2. Create a secret to enable SSL communication between the OMNIbus component of your on-premises Operations Management installation and the cloud native Netcool Operations Insight components.

If you do not require an SSL connection, create the secret with blank entries. Complete the following steps to configure authentication:
a. Configure OMNIbus on your on-premises Operations Management installation to use SSL, if it is not doing so already. To check, run the command oc get secrets -n namespace and check whether the secret custom_resource-omni-certificate-secret exists. If the secret does not exist and the OMNIbus components are using SSL, complete the following steps.
b. Extract the certificate from your on-premises Operations Management installation.

$NCHOME/bin/nc_gskcmd -cert -extract -db "key_db" -pw password -label "cert_name" -target "ncomscert.arm"

Where
• key_db is the name of the key database file.
• password is the password to your key database.
• cert_name is the name of your certificate.
c. Copy the extracted certificate, ncomscert.arm, over to the infrastructure node of your Red Hat OpenShift cluster, or to the node on your cluster where the kubectl CLI is installed.
d. Create a secret for the certificate.

kubectl create secret generic custom_resource-omni-certificate-secret --from-literal=PASSWORD=password --from-file=ROOTCA=certificate --namespace namespace

Where
• custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment, in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install).
• password is a password of your choice.
• certificate is the path and file name of the certificate that was copied to your cluster in the previous step, ncomscert.arm.
• namespace is the name of the namespace into which you want to install the cloud native components.
Note: If the ObjectServer is not named 'AGG_V', which is the default, then you must set the global.hybrid.objectserver.config.ssl.virtualPairName parameter when you configure the installation parameters later. For more information, see “Hybrid operator properties” on page 162.
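Before you move on, you can confirm that both secrets exist in the target namespace; this quick check assumes the secret names used above.

kubectl get secret custom_resource-omni-secret custom_resource-omni-certificate-secret -n namespace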

About this task
Learn about the properties that can be specified for each connection layer:
Table 33. Connection layer properties
• noiReleaseName: The release name to be associated with the ObjectServer properties. This is the release name of the hybrid or cloud instance that must be connected with the ObjectServer aggregation pair.
• objectServer.backupHost: Host name of the backup ObjectServer.
• objectServer.backupPort: Port number of the backup ObjectServer.
• objectServer.deployPhase: Determines when the OMNIbus Netcool Operations Insight on OpenShift schema is deployed.
• objectServer.primaryHost: Host name of the primary ObjectServer.
• objectServer.primaryPort: Port number of the primary ObjectServer.
• objectServer.sslRootCAName: Specifies the common name (CN) for the certificate authority (CA) certificate.
• objectServer.sslVirtualPairName: Only needed when setting up an SSL connection.
• objectServer.username: User name for connecting to an on-premises ObjectServer.

The operator has cluster scope permissions. It requires role-based access control (RBAC) authorization at a cluster level because it deploys and modifies Custom Resource Definitions (CRDs) and cluster roles.

Procedure
1. Log in to the OLM console with a URL of the following format:

https://console-openshift-console.apps.hostname/

Where hostname is the host name of the master node.
2. To install a connection layer on your cloud architecture, select the Create Instance link under the hybrid or cloud custom resource.
3. Use the YAML or Form view and provide the required values to install a connection layer. For more information, see Table 33 on page 172. Note: Set the objectServer.deployPhase property to install and do not change it.
4. Select the Create button and specify a release name for the connection layer.
5. Under the All Instances tab, a connection layer instance appears. View the status for updates on the installation. When the instance state shows OK, the connection layer is fully deployed.

What to do next
Deploy a connection layer for each separate aggregation pair that you want to connect to a single Netcool Operations Insight on OpenShift instance.

Installing the connection layer operator with the CLI
Learn how to install the connection layer operator with the command line interface (CLI). Each connection layer operator establishes a connection to an additional ObjectServer in your hybrid environment.

Before you begin
You must first deploy IBM Netcool Operations Insight on Red Hat OpenShift in your cloud or hybrid environment. For more information, see “Installing operators with the CLI using PPA” on page 120 and “Installing cloud native components (CLI)” on page 159.
This installation connects an ObjectServer aggregation pair to a Netcool Operations Insight on OpenShift instance. Before you deploy the connection layer, create two secrets:
1. Create a secret to enable cloud native Netcool Operations Insight components to access your on-premises Operations Management ObjectServer.

kubectl create secret generic custom_resource-omni-secret --from-literal=OMNIBUS_ROOT_PASSWORD=omni_password --namespace namespace

Where

• custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment, in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install).
• namespace is the name of the namespace into which you want to install the cloud native components.
• omni_password is the root password for the on-premises Netcool/OMNIbus that you want to connect to.
2. Create a secret to enable SSL communication between the OMNIbus component of your on-premises Operations Management installation and the cloud native Netcool Operations Insight components. If you do not require an SSL connection, create the secret with blank entries. Complete the following steps to configure authentication:
a. Configure OMNIbus on your on-premises Operations Management installation to use SSL, if it is not doing so already. To check, run the command oc get secrets -n namespace and check whether the secret custom_resource-omni-certificate-secret exists. If the secret does not exist and the OMNIbus components are using SSL, complete the following steps.
b. Extract the certificate from your on-premises Operations Management installation.

$NCHOME/bin/nc_gskcmd -cert -extract -db "key_db" -pw password -label "cert_name" -target "ncomscert.arm"

Where
• key_db is the name of the key database file.
• password is the password to your key database.
• cert_name is the name of your certificate.
c. Copy the extracted certificate, ncomscert.arm, over to the infrastructure node of your Red Hat OpenShift cluster, or to the node on your cluster where the kubectl CLI is installed.
d. Create a secret for the certificate.

kubectl create secret generic custom_resource-omni-certificate-secret --from-literal=PASSWORD=password --from-file=ROOTCA=certificate --namespace namespace

Where
• custom_resource is the name that you will use for your cloud native Netcool Operations Insight components deployment, in name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install).
• password is a password of your choice.
• certificate is the path and file name of the certificate that was copied to your cluster in the previous step, ncomscert.arm.
• namespace is the name of the namespace into which you want to install the cloud native components.
Note: If the ObjectServer is not named 'AGG_V', which is the default, then you must set the global.hybrid.objectserver.config.ssl.virtualPairName parameter when you configure the installation parameters later. For more information, see “Hybrid operator properties” on page 162.

About this task
Learn about the properties that can be specified for each connection layer:

Table 34. Connection layer properties
• noiReleaseName: The release name to be associated with the ObjectServer properties. This is the release name of the hybrid or cloud instance that must be connected with the ObjectServer aggregation pair.
• objectServer.backupHost: Host name of the backup ObjectServer.
• objectServer.backupPort: Port number of the backup ObjectServer.
• objectServer.deployPhase: Determines when the OMNIbus Netcool Operations Insight on OpenShift schema is deployed.
• objectServer.primaryHost: Host name of the primary ObjectServer.
• objectServer.primaryPort: Port number of the primary ObjectServer.
• objectServer.sslRootCAName: Specifies the common name (CN) for the certificate authority (CA) certificate.
• objectServer.sslVirtualPairName: Only needed when setting up an SSL connection.
• objectServer.username: User name for connecting to an on-premises ObjectServer.

The operator has cluster scope permissions. It requires role-based access control (RBAC) authorization at a cluster level because it deploys and modifies Custom Resource Definitions (CRDs) and cluster roles. Create and deploy a custom resource for the connection layer by completing the following steps:

Procedure
1. For each connection layer, create the custom resource by editing the parameters in the deploy/crds/file_name.yaml file, where file_name is the name of your custom resource YAML file for your cloud or hybrid deployment. Specify the connection layer release name and the ObjectServer details. For more information, see Table 34 on page 175. Note: Specify a unique release name for each connection layer.
2. Run the following command:

kubectl apply -f deploy/crds/file_name.yaml
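For orientation, a connection layer custom resource might look like the following sketch. The apiVersion and kind values are assumptions inferred from the noihybridlayers.noi.ibm.com CRD that is listed in the uninstallation topic later in this chapter, the nesting of the properties under spec is also an assumption, and all host names and the release name are placeholders; use the file shipped with your release as the authoritative template.

cat <<EOF > deploy/crds/file_name.yaml
# Sketch only: apiVersion, kind, and spec layout are assumptions, not confirmed values.
apiVersion: noi.ibm.com/v1beta1
kind: NOIHybridLayer
metadata:
  name: connection-layer-1
  namespace: namespace
spec:
  noiReleaseName: custom_resource
  objectServer:
    primaryHost: primary.example.com
    primaryPort: 4100
    backupHost: backup.example.com
    backupPort: 4100
    username: root
    deployPhase: install
EOF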

What to do next
Deploy a connection layer for each separate aggregation pair that you want to connect to a single Netcool Operations Insight on OpenShift instance.

Completing hybrid HA setup (HA only)
Use this topic to configure the WebGUI nodes to use the OAuth database, and restart the required services.

About this task

Procedure
1. Configure WebGUI with the OAuth database.

On each WebGUI node, update the file $JazzSM_Profile_Home/config/cells/JazzSMNode01Cell/oauth20/NetcoolOAuthProvider.xml with the changes that are made to the OAuthConfigSample.xml file here: https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.multiplatform.doc/ae/cwbs_oauthsql.html . The value of the oauthjdbc.JDBCProvider parameter in NetcoolOAuthProvider.xml must match the JNDI name used for the datasource in “Setting up persistence for the OAuth service (HA only)” on page 157. To avoid potential conflict, move the base.clients.xml file with the following commands:

cd $JazzSM_Profile_Home/config/cells/JazzSMNode01Cell/oauth20
mv base.clients.xml base.clients.xml.backup

Note: Complete this step each time that the Netcool Hybrid Deployment Option Integration Kit is redeployed.
2. Restart on-premises Dashboard Application Services Hub instances. Restart all Dashboard Application Services Hub instances on your on-premises Operations Management installation by using the following commands:

cd JazzSM_WAS_Profile/bin
./stopServer.sh server1 -username smadmin -password password
./startServer.sh server1

where JazzSM_WAS_Profile is the location of the application server profile that is used for Jazz for Service Management. This is usually /opt/IBM/JazzSM/profile.
3. Restart the common-ui and cem-users pods on your cloud native Netcool Operations Insight components deployment. A combined one-pass alternative is sketched after step 4.
a. Find the names of the common-ui and cem-users pods on your cloud native Netcool Operations Insight components deployment with the following commands:

oc get pod | grep common-ui
oc get pod | grep cem-users

b. Restart these pods with the following command:

oc delete pod pod_name

where pod_name is the name of the pod to be restarted.
4. Complete the post-installation steps in “Post installation setup and verification” on page 176.
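As an alternative to finding and deleting each pod individually in step 3, the lookups and deletions can be combined into one pass; this sketch assumes the default oc get pod output, in which the pod name is the first column.

oc get pod --no-headers | egrep "common-ui|cem-users" | awk '{print $1}' | xargs oc delete pod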

Post-installation tasks
Perform the following tasks to verify and configure your cloud native Netcool Operations Insight components deployment.
Enable the launch-in-context menu to start manual or semi-automated runbooks from events for your hybrid IBM Netcool Operations Insight on Red Hat OpenShift deployment. For more information, see “Installing Netcool/Impact to run the trigger service” on page 249.

Post installation setup and verification
Follow these steps to add the required roles to your user, and to verify that your hybrid installation is working.

Procedure
1. Log in to your on-premises DASH web application. If you have used the default root location of /ibm/console and the default secure port of 16311, then the URL is in the following format: https://host.domain:16311/ibm/console.jsp. For more information, see “Getting started with Netcool Operations Insight” on page 351.
2. Trust the certificate for the cloud native Netcool Operations Insight components deployment server.

The first time that you log in to your Operations Management installation after configuring your hybrid deployment, you will be presented with an additional security screen. You will need to trust the certificate from the cloud native Netcool Operations Insight components server to enable on-premises Dashboard Application Services Hub to authenticate with it.
3. Add additional roles for your user.
a) Select Console Settings->User Roles->Search and select your user from those listed in Available Users.
b) Add these roles to each user who requires access to the UI: noi_lead, noi_viewer, and noi_administrator.
c) Log out and back in again.
4. Go to Console Settings -> Console Integration and verify that an integration with a Console Integration Name of Cloud Analytics is present.
5. (Optional) Verify that the Insights->Cloud Analytics->Manage Policies option is showing on the UI.
6. (Optional) Verify that OAuth authentication between the cloud native Netcool Operations Insight components deployment and Operations Management is working.
a) From the cloud native Netcool Operations Insight components deployment cluster, issue a curl command to get an access token from your on-premises installation.

curl -k -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -d "grant_type=password&client_id=client-id&client_secret=client- secret&username=user&password=pass" https://host.domain:16311/oauth2/endpoint/ NetcoolOAuthProvider/token | sed -e 's/[{}\,]/\n/g'

where
• client_id is the value of client-id in the secret custom_resource-was-oauth-cnea-secrets. If this is not known, it can be retrieved by using kubectl get secrets custom_resource-was-oauth-cnea-secrets -o json -n default | jq -r ".data[\"client-id\"]" | base64 -d, where custom_resource is the name of the cloud native Netcool Operations Insight components custom resource.
• client_secret is the value of client-secret in the secret custom_resource-was-oauth-cnea-secrets. If this is not known, it can be retrieved by using kubectl get secrets custom_resource-was-oauth-cnea-secrets -o json -n default | jq -r ".data[\"client-secret\"]" | base64 -d, where custom_resource is the name of the cloud native Netcool Operations Insight components custom resource.
• user is the DASH administrator username.
• pass is the DASH administrator password.
• host.domain is the fully qualified hostname of the Jazz® for Service Management application server.
For example:

$ curl -k -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -d "grant_type=password&client_id=my-client-id&client_secret=my-client- secret&username=smadmin&password=password" https://testserver1.test:16311/oauth2/endpoint/ NetcoolOAuthProvider/token | sed -e 's/[{}\,]/\n/g' { "access_token": "AccessTokenExample", "token_type": "Bearer", "expires_in": 3600, "scope": "", "refresh_token": "RefreshTokenExample" }

b) From your cloud native Netcool Operations Insight components cluster, verify that access to the on-premises installation is authorized with the access token that was returned in the previous step.

curl -k -H 'Accept: application/json' -H "Authorization: Bearer access_token" https://host.domain:16311/ibm/console/dashauth/DASHUserAuthServlet

where
• access_token is the value of access_token, as returned in the previous step.
• host.domain is the fully qualified hostname of the Jazz® for Service Management application server.
For example:

$ curl -k -H 'Accept: application/json' -H "Authorization: Bearer AccessTokenExample" https://testserver1.test.com:16311/ibm/console/dashauth/DASHUserAuthServlet
{
  "user": {
    "firstname": "smadmin",
    "surname": "smadmin",
    "roles": [
      "iscadmins",
      "noi_engineer",
      "ncw_user",
      "suppressmonitor",
      "netcool_rw",
      "monitor",
      "chartAdministrator",
      "ncw_gauges_viewer",
      "operator",
      "samples",
      "netcool_ro",
      "iscusers",
      "ncw_admin",
      "configurator",
      "administrator",
      "chartCreator",
      "chartViewer",
      "noi_operator",
      "ncw_gauges_editor",
      "noi_lead",
      "ncw_dashboard_editor",
      "noi_administrator"
    ],
    "id": "smadmin",
    "username": "uid=smadmin,o=defaultWIMFileBasedRealm"
  }
}

7. (Optional) Open the Event Viewer. A new view called Example_IBM_CloudAnalytics has been created on your on-premises installation, with three columns: Grouping, Seasonal and Topology.

Retrieving passwords from secrets
After a successful installation of the cloud native Netcool Operations Insight components, passwords can be retrieved from the secrets that contain them.

About this task
For example, to retrieve the couchdb password, run the following command:

oc get secret custom_resource-couchdb-secret -o json -n namespace | grep password | cut -d : -f2 | cut -d '"' -f2 | base64 -d;echo

Where
• custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml/noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace in which the cloud native Netcool Operations Insight components are installed.
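An equivalent retrieval that avoids the grep and cut parsing is a jsonpath query, as in the following sketch; the data key name password is an assumption about the secret's layout, so adjust it if your secret stores the value under a different key.

oc get secret custom_resource-couchdb-secret -n namespace -o jsonpath='{.data.password}' | base64 -d; echo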

Changing passwords and recreating secrets
Changes to any of the passwords used by the cloud native Netcool Operations Insight components require the secrets that use those passwords to be recreated, and the pods that use those secrets to be restarted. Use the following procedure if you need to change any of these passwords.

Procedure
Use this table to help you identify the secret that uses a password, and the pods that use a secret.

• couchdb
Corresponding secret: custom_resource-couchdb-secret
Dependent pods: custom_resource-couchdb, custom_resource-ibm-hdm-analytics-dev-aggregationcollaterservice, custom_resource-ibm-hdm-analytics-dev-trainer
• hdm
Corresponding secret: custom_resource-cassandra-auth-secret
Dependent pods: custom_resource-cassandra
• redis
Corresponding secret: custom_resource-ibm-redis-authsecret
Dependent pods: custom_resource-ibm-hdm-analytics-dev-collater-aggregationservice, custom_resource-ibm-hdm-analytics-dev-dedup-aggregationservice
• kafka
Corresponding secret: custom_resource-kafka-admin-secret
Dependent pods: custom_resource-ibm-hdm-analytics-dev-archivingservice, custom_resource-ibm-hdm-analytics-dev-collater-aggregationservice, custom_resource-ibm-hdm-analytics-dev-dedup-aggregationservice, custom_resource-ibm-hdm-analytics-dev-inferenceservice, custom_resource-ibm-hdm-analytics-dev-ingestionservice, custom_resource-ibm-hdm-analytics-dev-normalizer-aggregationservice
• admin
Corresponding secret: custom_resource-kafka-client-secret
Dependent pods: custom_resource-ibm-hdm-analytics-dev-archivingservice, custom_resource-ibm-hdm-analytics-dev-collater-aggregationservice, custom_resource-ibm-hdm-analytics-dev-dedup-aggregationservice, custom_resource-ibm-hdm-analytics-dev-inferenceservice, custom_resource-ibm-hdm-analytics-dev-ingestionservice, custom_resource-ibm-hdm-analytics-dev-normalizer-aggregationservice

Where custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml/noi.ibm.com_nois_cr.yaml (CLI install).
To change a password, use the following procedure.
1. Change the password.
2. Use the table at the start of this topic to find the secret that corresponds to the password that has been changed, and delete this secret.

kubectl delete secret secretname --namespace namespace

Where
• secretname is the name of the secret to be recreated.
• namespace is the name of the namespace in which the secret to be recreated exists.
3. Recreate the secret with the desired new password. See “Configuring authentication” on page 149 for instructions on how to create the required secret.
4. Use the table at the start of this topic to find which pods depend on the secret that you have recreated and require restarting.
5. Restart the required pods by using the following command:

kubectl delete pod podname -n namespace

Where
• podname is the name of the pod that requires restarting.
• namespace is the name of the namespace in which the pod to be restarted exists.
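When several pods depend on one secret, the deletion steps 2 and 5 can be bundled into a small helper. This is a sketch only: it automates the two delete commands shown above, and the secret still has to be recreated between them as described in step 3.

#!/bin/sh
# Usage: rotate_secret.sh secretname namespace pod [pod ...]
SECRET=$1; NS=$2; shift 2
kubectl delete secret "$SECRET" --namespace "$NS"
# Recreate the secret here, as described in "Configuring authentication" on page 149.
for POD in "$@"; do
  kubectl delete pod "$POD" -n "$NS"
done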

Using custom certificates for routes
Red Hat OpenShift automatically generates TLS certificates for external routes, but you can use your own certificate instead. Learn how to update external routes to use custom certificates on OpenShift. Internal TLS microservice communications are not affected.
You can update the OpenShift ingress to use a custom certificate for all external routes across the cluster. For more information, see https://docs.openshift.com/container-platform/4.4/authentication/certificates/replacing-default-ingress-certificate.html.
If required, you can add a custom certificate for a single external route. For more information, see https://docs.openshift.com/container-platform/4.4/networking/routes/secured-routes.html.

Uninstalling a hybrid installation
Use this information to uninstall your hybrid deployment, or to uninstall just the cloud native Netcool Operations Insight components of your hybrid deployment and retain a working on-premises Operations Management installation.

About this task

Procedure
1. Delete the Netcool Operations Insight operator instance by running this command:

oc delete noihybrid noi_operator_instance_name

2. Delete the noi-operator from your hybrid deployment. You can do this from the command line:

oc delete deployment noi-operator
oc delete role noi-operator
oc delete rolebinding noi-operator
oc delete clusterrole noi-operator
oc delete clusterrolebinding noi-operator-cluster
oc delete serviceaccount noi-operator

or from the OLM UI: select Operator > Installed Operators under the Navigation page, select your Project, and select Uninstall the Netcool Operations Insight Operator.
3. Remove the Custom Resource Definitions (CRDs) for the cloud components. Find the CRDs for the cloud components with the following command:

oc get crd | egrep "noi|cem|asm" -n namespace

Where namespace is the name of the namespace where your cloud components are deployed. For example:

oc get crd |egrep "noi|cem|asm" asmformations.asm.ibm.com 2020-05-17T19:44:53Z asmhybrids.asm.ibm.com 2020-05-17T19:44:53Z asms.asm.ibm.com 2020-05-17T19:44:53Z cemformations.cem.ibm.com 2020-05-17T19:44:53Z cemserviceinstances.cem.ibm.com 2020-05-17T19:44:53Z noiformations.noi.ibm.com 2020-05-17T19:42:28Z noihybridlayers.noi.ibm.com 2020-05-17T19:42:28Z noihybrids.noi.ibm.com 2020-05-17T19:42:28Z nois.noi.ibm.com 2020-05-17T19:42:28Z

Delete each of the CRDs found above with the following command:

oc delete crd crd_name -n namespace

Where
• crd_name is the name of the CRD to be deleted.
• namespace is the name of the namespace where your cloud components are deployed.
4. Remove the secrets used by the cloud native Netcool Operations Insight components. Run the following command to find and output a list of secrets. Then paste the output onto the command line to delete the required secrets.

oc get secrets | egrep "tls-secret|Opaque" | awk '{print "kubectl delete secret " $1}'
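If you prefer to execute the generated deletion commands directly rather than pasting them back onto the command line, the same listing can be piped to a shell, as in the following sketch; the identical pattern applies to the PVC cleanup in step 6.

oc get secrets | egrep "tls-secret|Opaque" | awk '{print "kubectl delete secret " $1}' | sh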

5. Clean up your local storage.
6. Remove PVCs. Persistent Volume Claims (PVCs) are not deleted from StatefulSet pods. These can be retained, and will be successfully bound in subsequent deployments if required. Run the following command to find and output a list of PVCs. Then paste the output onto the command line to delete the required PVCs.

oc get pvc --all-namespaces | egrep -v "NAME|kube-system|cert-manager" | awk '{print "kubectl delete pvc "$2" --namespace="$1}'

Where namespace is the name of the namespace where your cloud native Netcool Operations Insight components are deployed.

7. If you want to also remove your on-premises Operations Management installation, see “Uninstalling on premises” on page 98.

Chapter 4. Upgrade and roll back of Operations Management

Plan the upgrade and complete any pre-upgrade tasks before upgrading or rolling back Operations Management.

Upgrade and roll back on premises
Follow these instructions to upgrade or roll back Netcool Operations Insight on premises.

Before you begin
Back up all products and components in the environment.
Related concepts
Connections in the Server Editor
Related tasks
Restarting the Web GUI server
Configuring Reporting Services for Network Manager
Related reference
Web GUI server.init file
Related information
Gateway for Message Bus documentation
Operations Analytics - Log Analysis Welcome page: Within the Operations Analytics - Log Analysis Welcome page, proceed as follows: (1) Select the version of interest. (2) For information on backing up and restoring Operations Analytics - Log Analysis data, or on installing Operations Analytics - Log Analysis, perform a relevant search.

Updated versions in the V1.6.1 release
To perform the most recent upgrade of Netcool Operations Insight, from V1.6.0.3 to V1.6.1, you must download the software described in this table, from either Passport Advantage or Fix Central.
Note: If a table cell in either the Download from Passport Advantage column or the Download from Fix Central column is empty, then there is nothing to download from that location for that particular product or component.

Table 35. Software downloads for upgrade of Netcool Operations Insight from V1.6.0.3 to V1.6.1
• IBM Tivoli Netcool/OMNIbus core components: target release V8.1.0.22. Passport Advantage: CJ5N1EN. Fix Central: V8.1 Fix Pack 22.
• Tivoli Netcool/OMNIbus Web GUI: target release V8.1.0.19. Fix Central: V8.1 Fix Pack 19.
• IBM Tivoli Netcool/Impact: target release V7.1.0.19. Passport Advantage: CJ5N2EN. Fix Central: V7.1 Fix Pack 19.
• Db2: target releases V11.1 and V11.5. Passport Advantage: CJ3INML (V11.1) and CJ7QGEN (V11.5).
• Operations Analytics - Log Analysis: target release V1.3.6. Passport Advantage: CJ5N3EN.
• IBM Tivoli Network Manager IP Edition: target release V4.2.0.10. Passport Advantage: CJ5N4EN. Fix Central: V4.2 Fix Pack 9.
• Network Health Dashboard: target release V4.2. Passport Advantage: CJ0S2EN. Fix Central: V4.2.
• IBM Tivoli Netcool Configuration Manager: target release V6.4.2.11. Passport Advantage: CJ5N5EN. Fix Central: V6.4.2 Fix Pack 11.
• IBM Network Performance Insight: target release V1.3.1. Passport Advantage: CJ5N6EN.
• IBM Agile Service Manager: target release V1.1.8. Passport Advantage: CJ5N7EN.
• IBM Agile Service Manager Observers: target release V1.1.8. Passport Advantage: CJ5N8EN.
• Jazz for Service Management: target release V1.1.3.7. Passport Advantage: CJ49VML.

Downloading product and components
To upgrade from V1.6.0 to V1.6.1, you must download software from Passport Advantage and Fix Central.

About this task
This scenario describes how to upgrade Netcool Operations Insight from V1.6.0 to the current version, V1.6.1. The scenario assumes that Netcool Operations Insight is deployed as shown in the simplified architecture in the following figure. Depending on how your Netcool Operations Insight system is deployed, you will need to download the software and run the upgrade on different servers.

Figure 9. Simplified architecture for the upgrade scenario

Procedure
1. For information about where to obtain downloads for each product and component, see “Updated versions in the V1.6.1 release” on page 183. Note: You will need to log in to IBM Passport Advantage or Fix Central, as appropriate, to download the software.
2. Download the software to the servers listed in the table.

Table 36. Which server to download software to
• If your current Netcool Operations Insight installation includes the Netcool Operations Insight base solution only, download software related to:
- Netcool/OMNIbus and Netcool/Impact, to Server 1
- Operations Analytics - Log Analysis, to Server 2
- Jazz for Service Management and WebSphere Application Server, to Server 3
For more details, see “Applying the latest fix packs” on page 186.
• If it includes the Network Management for Operations Insight solution extension, download software related to:
- Network Manager core components, to Server 4
- Netcool Configuration Manager core components, to Server 4
• If it includes the Performance Management for Operations Insight solution extension, download software related to:
- Network Performance Insight, to Server 5
For more details, see “Upgrading Network Performance Insight” on page 191.
• If it includes the Service Management for Operations Insight solution extension, download software related to:
- Agile Service Manager Base, to Server 3 and Server 6
- Agile Service Manager Observers, to Server 6
For more details, see “Installing on-premises Agile Service Manager” on page 95.

Related information
Passport Advantage: Click here to go to the IBM Passport Advantage website.
Fix Central: Click here to go to the Fix Central website.

Applying the latest fix packs
Apply the latest available fix packs to upgrade to the latest version of Netcool Operations Insight.

About this task
Fix packs can be full image fix packs, containing the full product image, or upgrade fix packs, containing just the code for fix updates from the last release. Full image fix packs are made available on Passport Advantage. Upgrade fix packs are made available on Fix Central. For a list of fix packs required for upgrade to the latest version of Netcool Operations Insight, see “Updated versions in the V1.6.1 release” on page 183.
If you have a hybrid installation, the Update option in Installation Manager is not supported for the Netcool Hybrid Deployment Option Integration Kit. For more information, see “Installing the integration kit” on page 167.

Procedure
1. For each fix pack upgrade, start Installation Manager and configure it to point to the repository.config file for the fix pack.
2. In the main Installation Manager window, click Update and complete wizard instructions similar to the following:
a) In the Update Packages tab, select the product group to find related update packages, and click Next. A list of the available update packages displays.
b) From the list of available update packages, select the relevant version, and click Next.
c) In the Licenses tab, review the licenses. Select I accept the terms in the license agreements and click Next.
d) In the Features tab, select the features for your update package, and click Next.
e) Complete the configuration details, and click Next.
f) In the Summary tab, review summary details. If you need to change any detail, click Back; if you are happy with the summary details, click Update and wait for the installation of the update package to complete.
g) When the installation of the update package completes, the window updates with details of the installation. Click Finish.
Related information
Fix Central: Click here to go to the Fix Central website.

Roll back on-premises Netcool Operations Insight from V1.6.1 to V1.6.0.3
To roll back an on-premises installation, use Installation Manager's roll back functionality to roll back each of the fix packs that you applied during your Netcool Operations Insight upgrade.

Before you begin
The components that must be rolled back are the components that were upgraded for the V1.6.0.3 to V1.6.1 upgrade. For more information, see “Updated versions in the V1.6.1 release” on page 183.

Procedure
1. Start IBM Installation Manager in your preferred mode. For more information, see https://www.ibm.com/support/knowledgecenter/SSDV2W/im_family_welcome.html . Note: Not all products and components support all installation modes, and some products may require processes to be stopped. Order of rollback may also be significant. See the individual product or component documentation for information on this.
2. In Installation Manager, click or select Roll Back.
3. From the Package Group Name list, select the package group that contains the packages that you want to roll back. Click or select Next.
4. Select the version of the package that you want to roll back to, and then click or select Next.
5. Review the Summary information and then click or select Roll Back.
6. Repeat until all the required packages have been rolled back.

Upgrading Event Analytics
Follow these instructions to upgrade Event Analytics to the latest version.

Upgrading Event Analytics
You can upgrade the IBM Netcool Operations Insight packages for Event Analytics by applying the latest fix packs.

About this task
To perform the upgrade, use the IBM Installation Manager Update functions to locate update packages, and update your environment with the following product update packages:
• Packages for IBM Tivoli Netcool/Impact:
IBM Tivoli Netcool/Impact GUI Server_7.1.0.19
IBM Tivoli Netcool/Impact Server_7.1.0.19
IBM Tivoli Netcool/Impact Server Extensions for Netcool Operations Insight_7.1.0.19
• Packages for IBM Tivoli Netcool/OMNIbus:
IBM Tivoli Netcool/OMNIbus_8.1.0.22
• Packages for IBM Tivoli Netcool/OMNIbus Web GUI:
IBM Tivoli Netcool/OMNIbus Web GUI_8.1.0.19
Netcool Operations Insight Extensions for IBM Tivoli Netcool/OMNIbus Web GUI_8.1.0.19

Procedure
The product update packages must be updated individually. Complete steps 1 - 3 for each product update package.
1. Start Installation Manager. Change to the /eclipse subdirectory of the Installation Manager installation directory and enter the following command to start Installation Manager:
./IBMIM

2. Configure Installation Manager to point to either a local repository or an IBM Passport Advantage repository, where the download package is available. Within the IBM Knowledge Center content for Installation Manager, see the topic that is called Installing packages by using wizard mode, at the following URL: http://www-01.ibm.com/support/knowledgecenter/SSDV2W/im_family_welcome.html
3. In the main Installation Manager window, click Update and complete the following installation wizard instructions to complete the installation of your update package:
a) In the Update Packages tab, select the product group to find related update packages, and click Next. A list of the available update packages displays.
b) From the list of available update packages, select one update package that you want to install, and click Next. Remember that you can install only one update package at a time.
c) In the Licenses tab, review the licenses. Select I accept the terms in the license agreements and click Next.
d) In the Features tab, select the features for your update package, and click Next.
e) Complete the configuration details, and click Next.
f) In the Summary tab, review summary details. If you need to change any detail, click Back; if you are happy with the summary details, click Update and wait for the installation of the update package to complete.
g) When the installation of the update package completes, the window updates with details of the installation. Click Finish.
4. To ensure that the seasonal event reports that were created before upgrading are visible, run the SE_CLEANUPDATA policy as follows:
a) Log in as the administrator to the server where IBM Tivoli Netcool/Impact is stored and running.
b) Navigate to the policies tab and search for the SE_CLEANUPDATA policy.
c) To open the policy, double-click the policy.
d) To run the policy, select the run button on the policy screen toolbar.
5. To view the event configurations in the View Seasonal Events portlet, rerun the configurations. For more information about running event configurations, see the “Administering analytics configurations” on page 405 topics.

What to do next
1. Verify that the correct packages are installed. After you update each package, and to ensure that you have the correct environment, verify that the following packages are installed:
• Packages for IBM Tivoli Netcool/Impact:
IBM Tivoli Netcool/Impact GUI Server_7.1.0.19
IBM Tivoli Netcool/Impact Server_7.1.0.19
IBM Tivoli Netcool/Impact Server Extensions for Netcool Operations Insight_7.1.0.19
• Packages for IBM Tivoli Netcool/OMNIbus:
IBM Tivoli Netcool/OMNIbus_8.1.0.22
• Packages for IBM Tivoli Netcool/OMNIbus Web GUI:
IBM Tivoli Netcool/OMNIbus Web GUI_8.1.0.19
Netcool Operations Insight Extensions for IBM Tivoli Netcool/OMNIbus Web GUI_8.1.0.19
2. Configure the ObjectServer for Event Analytics. For more information, see “Configuring the ObjectServer” on page 291.
3. Connect to a valid database from within IBM Tivoli Netcool/Impact. To configure a connection to one of the Event Analytics supported databases, see the following topics:
• Db2: “Configuring Db2 database connection within Netcool/Impact” on page 273

• Oracle: “Configuring Oracle database connection within Netcool/Impact” on page 275
• MS SQL: “Configuring MS SQL database connection within Netcool/Impact” on page 277
4. If you add a cluster to the Impact environment, you must update the data sources in IBM Tivoli Netcool/Impact 7.1. For more information, see “Configuring extra failover capabilities in the Netcool/Impact environment” on page 293.
5. If you want to make use of the pattern generalization feature in Event Analytics, you must configure the type properties that are used for event pattern creation in IBM Tivoli Netcool/Impact. For more information, see “Configuring event pattern processing” on page 284.

Upgrading Event Analytics from stand-alone installations of Netcool/OMNIbus and Netcool/Impact
If you have stand-alone installations of Netcool/OMNIbus with Web GUI and Netcool/Impact, you can perform the upgrade.

Before you begin
Ensure that the following product packages are already installed:
• IBM Tivoli Netcool/Impact packages:
IBM Tivoli Netcool/Impact GUI Server_7.1.0.19
IBM Tivoli Netcool/Impact Server_7.1.0.19
• IBM Tivoli Netcool/OMNIbus packages:
IBM Tivoli Netcool/OMNIbus_8.1.0.22
• IBM Netcool packages:
IBM Tivoli Netcool/OMNIbus Web GUI_8.1.0.19

About this task
This upgrade scenario is for users who already use Tivoli Netcool/OMNIbus and Netcool/Impact but do not have the Netcool Operations Insight packages that are needed for the Event Analytics function, and now want the Event Analytics function. For this upgrade scenario, you must use IBM Installation Manager to install the product packages that are required for the Event Analytics function, and then update the product packages. The installation of product packages locates and installs the following two packages:
IBM Tivoli Netcool/Impact Server Extensions for Netcool Operations Insight_7.1.0.19
Netcool Operations Insight Extensions for IBM Tivoli Netcool/OMNIbus Web GUI_8.1.0.19
To perform the upgrade, complete the following steps.

Procedure
1. Start Installation Manager. Change to the /eclipse subdirectory of the Installation Manager installation directory and enter the following command to start Installation Manager:
./IBMIM
2. Configure Installation Manager to point to either a local repository or an IBM Passport Advantage repository, where the download package is available. Within the IBM Knowledge Center content for Installation Manager, see the topic that is called Installing packages by using wizard mode, at the following URL: http://www-01.ibm.com/support/knowledgecenter/SSDV2W/im_family_welcome.html
3. To install your packages, in the main Installation Manager window, click Install and complete the steps in the installation wizard to complete the installation of your packages:
a) In the Install tab, select the following product groups and product installation packages, and click Next.

• IBM Tivoli Netcool/Impact Server Extensions for Netcool Operations Insight_7.1.0.19
• Tivoli Netcool/OMNIbus Web GUI Version 8.1.0.19
• Netcool Operations Insight Extensions for IBM Tivoli Netcool/OMNIbus Web GUI_8.1.0.19
b) In the Licenses tab, review the licenses. When you are happy with the license content, select I accept the terms in the license agreements and click Next.
c) In the Location tab, use the existing package group and location.
d) In the Features tab, select the features for your packages, and click Next.
e) In the Summary tab, review summary details. If you need to change any detail, click Back; if you are happy with the summary details, click Install and wait for installation of the packages to complete.
f) When installation of the packages completes, the window updates with details of the installation. Click Finish.
4. Migrate the rollup configuration. For more information about updating the rollup configuration, see “Adding columns to seasonal and related event reports” on page 278.

What to do next
1. Verify that the correct packages are installed. After you update each package, verify that the following packages are installed to ensure that you have the correct environment.
• Packages for IBM Tivoli Netcool/Impact:
IBM Tivoli Netcool/Impact GUI Server_7.1.0.19
IBM Tivoli Netcool/Impact Server_7.1.0.19
IBM Tivoli Netcool/Impact Server Extensions for Netcool Operations Insight_7.1.0.19
• Packages for IBM Tivoli Netcool/OMNIbus:
IBM Tivoli Netcool/OMNIbus_8.1.0.22
• Packages for IBM Tivoli Netcool/OMNIbus Web GUI:
IBM Tivoli Netcool/OMNIbus Web GUI_8.1.0.19
Netcool Operations Insight Extensions for IBM Tivoli Netcool/OMNIbus Web GUI_8.1.0.19
2. Configure the ObjectServer for Event Analytics. For more information, see “Configuring the ObjectServer” on page 291.
3. Connect to a valid database from within IBM Tivoli Netcool/Impact. To configure a connection to one of the Event Analytics supported databases, see the following topics:
• Db2: “Configuring Db2 database connection within Netcool/Impact” on page 273
• Oracle: “Configuring Oracle database connection within Netcool/Impact” on page 275
• MS SQL: “Configuring MS SQL database connection within Netcool/Impact” on page 277
4. If you add a cluster to the Impact environment, you must update the data sources in IBM Tivoli Netcool/Impact. For more information, see “Configuring extra failover capabilities in the Netcool/Impact environment” on page 293.
5. If you want to use the pattern generalization feature in Event Analytics, you must configure the type properties that are used for event pattern creation in IBM Tivoli Netcool/Impact. For more information, see “Configuring event pattern processing” on page 284.

Migration of rollups in Netcool/Impact v7.1.0.13 and later

Fix pack v7.1.0.13 uses a new format for the creation of rollups that is different from previous versions of Netcool/Impact. A migration script is automatically executed during the install or upgrade process to convert pre-existing rollups to the v7.1.0.13 format. Run the Event Analytics configuration wizard after the upgrade to v7.1.0.13 to verify and save your configuration (see the note below).

• In v7.1.0.12 (or earlier), the rollup display names are free-form text with no formatting applied. For example:

reevent_rollup_1_column_name=ORIGINALSEVERITY
reevent_rollup_1_type=MAX
reevent_rollup_1_display_name=MaxSeverity

In this scenario, Netcool Operations Insight creates a new column in the database called MaxSeverity. The display name in Dashboard Application Services Hub is MaxSeverity, or whatever is defined in the customization directory in the Netcool/Impact uiprovider directory. With the introduction of the Event Analytics configuration wizard in v7.1.0.13, it was necessary to apply a new format to rollup display names.
• In v7.1.0.13, a format of column_name_TYPE is applied to rollup display names. For example:

reevent_rollup_1_column_name=ORIGINALSEVERITY
reevent_rollup_1_type=MAX
reevent_rollup_1_display_name=ORIGINALSEVERITY_MAX

Using the wizard, you can apply any display name to a column, in any language. Because creating database columns in any language could be error prone, the format of the names for rollup database columns is now set to column_name_TYPE. The display name is stored in the customization directory and files in the uiproviderconfig directory. This format makes it possible to change display names by using the Event Analytics configuration wizard. For this reason, a migration script is executed during install or upgrade to transform all pre-existing rollups to the v7.1.0.13 format.
Note: You must run the Event Analytics configuration wizard after upgrading to Netcool/Impact v7.1.0.13.
The following artifacts are changed as a result of the rollup migration script in Netcool/Impact v7.1.0.13:
• Stored metadata for rollups in configuration
• Database columns (renamed)
• Output parameters for policies
• Properties files
• Properties files rendered into a non-English language
Complete the steps of the wizard as described in “Configuring Event Analytics using the wizard” on page 264 after upgrading to v7.1.0.13 to verify and save any customizations to your configuration. Backup files that contain previous customizations are stored in $IMPACT_HOME/backup/install/gui_backup/ /uiproviderconfig/.

Upgrading Network Performance Insight

Upgrade Network Performance Insight to the latest release.

About this task
Network Performance Insight has a built-in mechanism for performing product upgrades.

Procedure
1. On the server where Network Performance Insight is installed (in our example, server 5), extract the Network Performance Insight eAssembly archive.
2. See the Network Performance Insight documentation at https://www.ibm.com/support/knowledgecenter/SSCVHB_1.3.1/npi_kc_welcome.html for more information on how to perform the upgrade.

Installing on-premises Agile Service Manager

Install the latest version of Agile Service Manager.

Procedure
1. To install the Agile Service Manager Core services and Observers, proceed as follows:
a) On the server where Agile Service Manager Core services are installed (in our example, server 6), extract the Agile Service Manager Base and Observer eAssembly archives.
b) Follow the instructions in the Agile Service Manager documentation to complete the installation.
2. To install the Agile Service Manager UI, proceed as follows:
a) On the server where Dashboard Application Services Hub is installed (in our example, server 3), extract the Agile Service Manager Base eAssembly archive.
b) Start Installation Manager and configure it to point to the repository.config file for Agile Service Manager.
c) See the Agile Service Manager documentation for more information on how to perform the installation and post-installation configuration tasks, such as configuring the Gateway for Message Bus to support the Agile Service Manager Event Observer.
Related information
Netcool Agile Service Manager Knowledge Center

Chapter 5. Configuring

Perform the following tasks to configure the components of Netcool Operations Insight.

Configuring Cloud and hybrid systems

Perform the following tasks to configure your Cloud or hybrid Netcool Operations Insight system.

Enabling SSL communications from Netcool/Impact on OpenShift

Learn how to configure Secure Sockets Layer (SSL) communications from IBM Tivoli Netcool/Impact on Red Hat OpenShift.

About this task
For information about enabling SSL communications from an on-premises deployment of Netcool/Impact, see https://www.ibm.com/support/knowledgecenter/en/SSSHYH_7.1.0/com.ibm.netcoolimpact.doc/admin/imag_enablingssl_for_external_servers.html. To enable SSL communications from a Netcool Operations Insight on OpenShift deployment, complete the following steps:

Procedure 1. Add your external certificate to the YAML file:

vi release_name-nciserver-external-cacerts.yaml

For example:
Note: You must indent the certificate in the YAML file.

apiVersion: v1
kind: ConfigMap
metadata:
  name: release_name-nciserver-external-cacerts
data:
  file.crt: |
    -----BEGIN CERTIFICATE-----
    MIIDRTCCAi2gAwIBAgIJAMWULciaKp4bMA0GCSqGSIb3DQEBCwUAMBQxEjAQBgNV
    ..
    WkUE81/qflUaSOVZRneo3xvkmYNfiYBkpw==
    -----END CERTIFICATE-----

Where release_name is your deployed release name.
2. Generate the configmap from the YAML file, by running the kubectl create command, as in the following example:

kubectl create -f release_name-nciserver-external-cacerts.yaml

The configmap can also be created from the certificate, as in the following example:

kubectl create configmap release_name-nciserver-external-cacerts --from-file=./cert.pem
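To confirm that the configmap exists before you continue, you can display it (an optional check that is not part of the original procedure; the name assumes the release_name convention used above):

kubectl get configmap release_name-nciserver-external-cacerts -o yaml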

3. If you deployed Netcool Operations Insight on OpenShift with the CLI, as described in the “Installing operators with the CLI using PPA” on page 120 topic, complete the following steps:
a) Create the nciserver.importNCICACerts.enabled property and set it to true in the custom resource YAML file, as in the following example:

helmValuesNOI:
  nciserver.importNCICACerts.enabled: true

Edit one of the following custom resource YAML files:

• For a cloud deployment with the CLI: deploy/crds/noi.ibm.com_nois_cr.yaml
• For a hybrid deployment with the CLI: deploy/crds/noi.ibm.com_noihybrids_cr.yaml
b) Apply the nciserver.importNCICACerts.enabled property by running the following command:

kubectl apply -f custom_resource_file.yaml

Where custom_resource_file.yaml is the file name and path of your custom resource YAML file, for example deploy/crds/noi.ibm.com_nois_cr.yaml.
4. If you deployed Netcool Operations Insight on OpenShift with the Operator Lifecycle Management (OLM) console, as described in the “Installing operators with the Operator Lifecycle Management console” on page 119 topic, complete the following steps:
a) Edit the deployment from the OLM console. Edit and save the YAML file directly in the console. Your changes are auto-deployed.
5. Delete the Netcool/Impact core server pod with the kubectl delete command:

kubectl delete pod cnea-nciserver-0

The Netcool/Impact core server pod is restarted with the external certificates in the trust.jks file. SSL communications from the Netcool/Impact core server pod are enabled.

Connecting a Cloud system to event sources

Learn how to connect to event sources.

Connecting event sources to Netcool Operations Insight on a Cloud deployment

After you successfully deploy Netcool Operations Insight, you can connect to an on-premises event source such as an IBM Tivoli Netcool/OMNIbus probe or gateway. You can connect directly to the ObjectServer NodePort or you can connect with a proxy NodePort. You can configure connections to Netcool Operations Insight in two ways. The primary method to connect your event sources is to make a direct Transmission Control Protocol (TCP) connection to the ObjectServer NodePort. This method supports plain text connections. If Transport Layer Security (TLS) encryption is required, you can connect with a proxy NodePort. This method supports plain text and TLS encrypted connections.

Connecting with the proxy NodePort

Learn how to connect to the ObjectServer from outside the OpenShift deployment by using the secure connection proxy with Transport Layer Security (TLS) encryption. The proxy provides a secure TLS encrypted connection for clients that require a direct Transmission Control Protocol (TCP) connection to the ObjectServer instance running on OpenShift. Typically, clients such as Netcool/OMNIbus Probes and Gateways require this type of connection. The clients can be installed in a traditional on-premises installation or deployed in another OpenShift cluster. The proxy is deployed automatically as part of the ibm-netcool-prod deployment. By default, the proxy is deployed with a TLS certificate, which is automatically created and signed by the OpenShift cluster Certificate Authority (CA) during deployment. However, it is also possible to use a custom certificate that has been signed by an external CA. For more information about configuring the proxy and the proxy config map, see “Proxy configmap” on page 695.

Identifying the proxy listening port

To connect to the IBM Tivoli Netcool/OMNIbus ObjectServer pair from outside the OpenShift cluster with Transport Layer Security (TLS) encryption, you must identify the externally accessible NodePort where the proxy listens for connections.

About this task
The proxy defines a Kubernetes service, called custom_resource-proxy, where custom_resource is the name of the custom resource for your deployment. The custom_resource-proxy service defines the NodePorts that clients must use when connecting to the ObjectServer pair.

Procedure
1. Describe the proxy service by running the following command:

kubectl get service -o yaml custom_resource-proxy -n namespace

Where:
• custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml/noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace in which Operations Management is installed.
2. Identify the NodePorts from the command output, for example:

ports:
- name: aggp-proxy-port
  nodePort: 30135
  port: 6001
  protocol: TCP
  targetPort: 6001
- name: aggb-proxy-port
  nodePort: 30456
  port: 6002
  protocol: TCP
  targetPort: 6002

In the example, the NodePort for the primary ObjectServer is 30135 and the NodePort for the backup ObjectServer is 30456.
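If you prefer to extract only the NodePort numbers, without reading through the full YAML output, a jsonpath query such as the following can be used (a sketch; the service name and port names follow the examples above):

kubectl get service custom_resource-proxy -n namespace -o jsonpath='{.spec.ports[?(@.name=="aggp-proxy-port")].nodePort}'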

Results
Make a note of the NodePorts that you identified. This information is required when configuring the client's Transport Layer Security (TLS) connection.

Configuring TLS encryption with Red Hat OpenShift

Follow this procedure when the proxy certificate has been automatically created and signed by the Red Hat OpenShift cluster CA during deployment.

Procedure
1. From the event source client, ensure that a connection can be made to a cluster master node. For example:

ping ${OCP_MASTER_ADDRESS}

Where ${OCP_MASTER_ADDRESS} is a resolvable network address for a cluster master node, for example master0.ocp42.ibm.com.
Note: Only a single master node is specified in this example. For production environments, it can be desirable to configure a load balancer between the master nodes to enable high availability and prevent a single point of failure.
2. Using OpenSSL from the event source client, identify the certificate common name (CN), from your IBM Netcool Operations Insight on-premises deployment:

# openssl s_client -connect ${OCP_MASTER_ADDRESS}:${AGG_PROXY_PORT}
CONNECTED(00000003)
depth=1 CN = openshift-service-serving-signer@1578571170
verify error:num=19:self signed certificate in certificate chain
---
Certificate chain
 0 s:/CN=m125-proxy.default.svc   <<<<<<<<<<<<<<
   i:/CN=openshift-service-serving-signer@1578571170
 1 s:/CN=openshift-service-serving-signer@1578571170
   i:/CN=openshift-service-serving-signer@1578571170
---

Where:
• ${OCP_MASTER_ADDRESS} is the address of a cluster master node from step 1.
• ${AGG_PROXY_PORT} is the cluster NodePort identified in “Identifying the proxy listening port” on page 195.
In the example above, the Common Name of the certificate that is presented by the proxy is m125-proxy.default.svc.
3. Using the OpenShift cluster CLI, extract the OpenShift cluster signer certificate by running the following command:

oc get secrets/signing-key -n openshift-service-ca -o template='{{index .data "tls.crt"}}' | base64 --decode > cluster-ca-cert.pem

4. Run the ping ${PROXY_COMMON_NAME} command. If this command fails because the name cannot be resolved, ask your DNS administrator to add this entry, or use the following commands to add this host to your /etc/hosts file. On the event source client, in the network hosts file, map the certificate common name to the IP address of an OpenShift master node, by running, for example:

echo "${OCP_MASTER_ADDRESS} ${PROXY_COMMON_NAME}" >> /etc/hosts

Where:
• ${OCP_MASTER_ADDRESS} is the address of a cluster master node from step 1.
• ${PROXY_COMMON_NAME} is the proxy certificate common name from step 2.
5. Import the OpenShift cluster signer certificate that is obtained in step 3 into the event source client keystore as a trusted certificate. Complete the following steps:
a) If necessary, create the keystore by using one of the following commands:

$NCHOME/bin/nc_ikeyman

Or

$NCHOME/bin/nc_gskcmd -keydb -create -db "$NCHOME/etc/security/keys/omni.kdb" -pw password -stash -expire 366

For more information about creating a keystore, see the Netcool/OMNIbus Knowledge Center: https://www.ibm.com/support/knowledgecenter/en/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/install/task/omn_con_ssl_creatingkeydbase.html
b) Import a privacy enhanced mail (PEM) encoded signer certificate by running one of the following commands:

$NCHOME/bin/nc_ikeyman

Or

$NCHOME/bin/nc_gskcmd -cert -add -file cluster-ca-cert.pem -db $NCHOME/etc/security/keys/omni.kdb -stashed

For more information about adding certificates from a CA, see the Netcool/OMNIbus Knowledge Center: https://www.ibm.com/support/knowledgecenter/en/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/install/task/omn_con_ssl_addingcerts.html
6. Note: To successfully complete the TLS handshake and establish a secure TLS connection, the ObjectServer address, which is specified in the omni.dat file, must exactly match the certificate subject CN value. Edit the client's omni.dat file to configure a Secure Sockets Layer (SSL) connection. Add the proxy Common Name value from step 2 as the server address and the proxy NodePort port number in the omni.dat file, as displayed in the following example:

[OCP_AGG_P_TLS]
{
    Primary: ${PROXY_COMMON_NAME} ssl ${AGGP_PROXY_PORT}
}
[OCP_AGG_B_TLS]
{
    Primary: ${PROXY_COMMON_NAME} ssl ${AGGB_PROXY_PORT}
}

For more information, see “Identifying the proxy listening port” on page 195.
7. Run the following command to generate the interfaces file:

$NCHOME/bin/nco_igen
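After nco_igen completes, you can optionally verify that the new TLS entries respond by using the nco_ping utility that ships with Netcool/OMNIbus (a sketch; OCP_AGG_P_TLS and OCP_AGG_B_TLS are the entry names from the example omni.dat above):

$NCHOME/omnibus/bin/nco_ping OCP_AGG_P_TLS
$NCHOME/omnibus/bin/nco_ping OCP_AGG_B_TLS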

Configuring TLS encryption with a custom certificate on Red Hat OpenShift

The proxy requires a public certificate and private key pair to be supplied through a Kubernetes secret called {{ .Release.Name }}-proxy-tls-secret. If you want to use a custom certificate, for example, one signed by your own public key infrastructure Certificate Authority (CA), create your own proxy secret, containing the public certificate and private key pair, before deployment. To enable a successful Transport Layer Security (TLS) handshake, import the CA signer certificate into the keystore of any client application as a trusted source.

Before you begin
Note: If you deployed IBM Netcool Operations Insight on OpenShift V3.2.1, configure TLS encryption with the default certificate. For more information, see Configuring TLS encryption with the default certificate on IBM Cloud Private.
Before deploying on Red Hat OpenShift, you can create your own certificate key pair and create the proxy TLS secret by completing the following steps:

About this task
Follow this procedure when the public certificate and private key have already been created and signed by an external CA. When you create the certificate, ensure that the subject Common Name (CN) field matches the following format:

proxy.custom_resource.fqdn

Where:
• fqdn is the fully qualified domain name (FQDN) of the cluster's master node.
• custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml/noi.ibm.com_nois_cr.yaml (CLI install).
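As an illustration only, a private key and certificate signing request (CSR) with the required subject CN could be generated with OpenSSL as follows; how the CSR is signed, and whether certificate.pem is returned directly, depends entirely on your own CA:

# Generate a 2048-bit key and a CSR whose subject CN matches the required format.
openssl req -new -newkey rsa:2048 -nodes -keyout key.pem -out proxy.csr -subj "/CN=proxy.custom_resource.fqdn"
# Submit proxy.csr to your CA; the signed result becomes certificate.pem.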

Procedure
1. Set the global.tls.certificate.useExistingSecret global property in the Helm chart.
2. Create the proxy TLS secret by running the following command:

kubectl create secret tls custom_resource-proxy-tls-secret --cert=certificate.pem --key=key.pem [--namespace namespace]

Where:
• custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml/noi.ibm.com_nois_cr.yaml (CLI install).
• certificate.pem is the signed certificate returned by the CA.
• key.pem is the private key corresponding to the signed certificate.
3. To establish a successful TLS connection, import the CA public certificate, which is used in step “2” on page 197. Complete the following steps:
a. If necessary, create the keystore by using one of the following commands:

$NCHOME/bin/nc_ikeyman

or

$NCHOME/bin/nc_gskcmd -keydb -create -db "$NCHOME/etc/security/keys/omni.kdb" -pw password -stash -expire 366

For more information about creating a keystore, see the Netcool/OMNIbus Knowledge Center: https://www.ibm.com/support/knowledgecenter/en/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/install/task/omn_con_ssl_creatingkeydbase.html
b. Import a privacy enhanced mail (PEM) encoded signer certificate by running one of the following commands:

$NCHOME/bin/nc_ikeyman

or

$NCHOME/bin/nc_gskcmd -cert -add -file mycert.pem -db $NCHOME/etc/security/keys/omni.kdb -stashed

For more information about adding certificates from CAs, see the Netcool/OMNIbus Knowledge Center: https://www.ibm.com/support/knowledgecenter/en/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/install/task/omn_con_ssl_addingcerts.html
4. Note: To successfully complete the TLS handshake and establish a secure TLS connection, the ObjectServer address, which is specified in the omni.dat file, must exactly match the certificate subject Common Name (CN) value. Certificates that are manually created must have a subject CN value in the following format:

proxy.custom_resource.fqdn

Edit the client's omni.dat file to configure a Secure Sockets Layer (SSL) connection. Specify ssl for each ObjectServer entry and add the server address and port number in the omni.dat file, as displayed in the following example:

[AGG_P]
{
    Primary: proxy.custom_resource.fqdn ssl 3XXXX
}
[AGG_B]
{
    Primary: proxy.custom_resource.fqdn ssl 3XXXX
}

For more information, see “Identifying the proxy listening port” on page 195.
5. Run the following command to generate the interfaces file:

$NCHOME/bin/nco_igen

Disabling TLS encryption

To disable Transport Layer Security (TLS) encryption, edit the proxy configmap.

Procedure
1. Open the proxy configmap for editing.

kubectl edit configmap custom_resource-proxy-config -n namespace

Where:
• custom_resource is the name of the custom resource for your deployment.
• namespace is the name of the namespace in which Netcool Operations Insight on Red Hat OpenShift is deployed.
2. Find the tlsEnabled flag and set it to false. Save and exit the configmap.

tlsEnabled: "false"
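As an alternative to interactive editing, the flag can be changed non-interactively with kubectl patch (a sketch; it assumes that tlsEnabled is stored under the data section of the configmap):

kubectl patch configmap custom_resource-proxy-config -n namespace --type merge -p '{"data":{"tlsEnabled":"false"}}'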

3. Find the proxy pod by using the following command:

kubectl get pods --namespace namespace | grep proxy

Where namespace is the name of the namespace in which Netcool Operations Insight on Red Hat OpenShift is deployed.
4. Restart the proxy pod.

kubectl delete pod proxy-pod -n namespace

Where • proxy-pod is the name of the proxy pod in your Netcool Operations Insight on Red Hat OpenShift deployment. • namespace is the name of the namespace in which Netcool Operations Insight on Red Hat OpenShift is deployed.

Connecting with the ObjectServer NodePort

Learn how to connect to the IBM Tivoli Netcool/OMNIbus ObjectServer failover pair from outside the Netcool Operations Insight on Red Hat OpenShift deployment.

Before you begin
The ObjectServer NodePorts do not support Transport Layer Security (TLS) encryption. If TLS encryption is required, connect with a proxy NodePort. For more information, see “Connecting with the proxy NodePort” on page 194.

About this task
Learn how to make a direct plain text Transmission Control Protocol (TCP) connection to the ObjectServer failover pair running in an IBM Netcool Operations Insight on Red Hat OpenShift deployment. Typically, clients such as Netcool/OMNIbus Probes and Gateways require this type of connection to write event data into the deployment. Connection to the ObjectServer pair is enabled with cluster NodePorts, which are defined by the following services:

custom_resource-objserv-agg-primary-nodeport
custom_resource-objserv-agg-backup-nodeport

Where custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml /noi.ibm.com_nois_cr.yaml (CLI install).

Procedure
1. Describe the primary ObjectServer service by running the following command:

kubectl get service custom_resource-objserv-agg-primary-nodeport -n namespace

Where:
• custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml/noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace in which Netcool Operations Insight on Red Hat OpenShift is installed.
2. Describe the backup ObjectServer service by running the following command:

kubectl get service custom_resource-objserv-agg-backup-nodeport -n namespace

Where:
• custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml/noi.ibm.com_nois_cr.yaml (CLI install).
• namespace is the name of the namespace in which Netcool Operations Insight on Red Hat OpenShift is installed.
3. Identify the NodePorts from the command output, for example:

NAME                                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)                          AGE
custom_resource-objserv-agg-primary-nodeport   NodePort   10.0.0.18                  4100:32312/TCP,31581:31581/TCP   3h

and

NAME                                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)                          AGE
custom_resource-objserv-agg-backup-nodeport    NodePort   10.0.0.162                 4100:30404/TCP,30302:30302/TCP   3h

In the examples, the NodePort for the primary ObjectServer is 32312 and the NodePort for the backup ObjectServer is 30404.
Note: Ensure that you have configured the cluster networking so that the NodePorts can be accessed from the client machine. For more information, consult the documentation for your cluster.
4. Combine the NodePort values with the public domain name or IP address of the cluster to form the address of the ObjectServer. Edit the client's omni.dat file to add entries for the primary and backup ObjectServer, as described in the following example:

[AGG_P]
{
    Primary: mycluster.icp 32312
}
[AGG_B]
{
    Primary: mycluster.icp 30404
}

Where mycluster.icp is the public domain name of the cluster.
5. Run the following command to generate the interfaces file:

$NCHOME/bin/nco_igen
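Before you start clients against these entries, you can optionally confirm that the NodePorts are reachable from the client host (a sketch; it assumes that the nc utility is available and uses the example values from step 3):

nc -vz mycluster.icp 32312
nc -vz mycluster.icp 30404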

Connecting an on-premises Operations Management ObjectServer to Netcool Operations Insight on a Cloud deployment

Learn how to configure a connection between an on-premises ObjectServer and a deployment of Netcool Operations Insight on Red Hat OpenShift. After you successfully deploy Netcool Operations Insight on Red Hat OpenShift, you can connect to an existing on-premises installation to create an event feed between your on-premises and cloud installations. A uni-directional gateway allows event data to flow in a single direction, either from on-premises to cloud, or from cloud to on-premises. A bidirectional gateway can be configured to allow event data to flow in both directions at the same time. The following figure shows the architecture.
Figure 10. Architecture of an on-premises ObjectServer connected to a Netcool Operations Insight on Red Hat OpenShift deployment.

Configuring a uni-directional gateway

About this task
Learn how to configure a uni-directional gateway to create an event feed from an on-premises ObjectServer to a deployment of Netcool Operations Insight on Red Hat OpenShift. An on-premises primary aggregation ObjectServer must exist and be configured according to Installing a primary aggregation ObjectServer.

Procedure
1. Identify the NodePort details of the Netcool Operations Insight on Red Hat OpenShift deployment's ObjectServers. Follow the instructions in “Connecting with the ObjectServer NodePort” on page 199 to identify the primary and backup ObjectServer NodePort details.
2. On the on-premises host, edit the omni.dat file and add an entry for a new uni-directional gateway, 'ICP_GATE'. Add entries for the source and destination ObjectServers, AGG_P and ICP_AGG_V. The ICP_AGG_V entry has the cloud ObjectServer NodePort details that you identified in step 1. Example omni.dat file:

[AGG_P]
{
    Primary: netcool1.onprem.fqdn 4100
}
[ICP_GATE]
{
    Primary: netcool2.onprem.fqdn 4300
}
[ICP_AGG_V]
{
    Primary: mycluster.icp 32312
    Backup: mycluster.icp 30404
}

3. Run $NCHOME/bin/nco_igen to create the interfaces file.
4. Configure the uni-directional gateway table replication and mapping files, ICP_GATE.tblrep.def and ICP_GATE.map. These files control which ObjectServer tables and columns are replicated. This example table replication file replicates the alerts.status, alerts.journal, and alerts.details tables.

cat << EOF > $NCHOME/omnibus/etc/ICP_GATE.tblrep.def
##############################################################################
# Netcool/OMNIbus Uni-directional ObjectServer Gateway 8.1.0
#
# ICP_GATE table replication definition file.
#
# Notes:
#
##############################################################################

REPLICATE INSERTS, UPDATES, FT_INSERTS FROM TABLE 'alerts.status' USING MAP 'StatusMap' ORDER BY 'Serial ASC';

REPLICATE INSERTS, UPDATES, FT_INSERTS FROM TABLE 'alerts.journal' USING MAP 'JournalMap';

REPLICATE INSERTS, UPDATES, FT_INSERTS FROM TABLE 'alerts.details' USING MAP 'DetailsMap';
EOF

This example map file maps the alerts.status, alerts.details, and alerts.journal tables.

cat << EOF > $NCHOME/omnibus/etc/ICP_GATE.map
##############################################################################
# Netcool/OMNIbus Uni-directional ObjectServer Gateway 8.1.0
#
# ICP_GATE Multitier map definition file.
#
# Notes:
#
# Fields that are marked as 'ON INSERT ONLY' will only be passed when an event
# is inserted for the first time. (ie. they will not be updated). The ordering
# of the fields is not important as the gateway will use named value insertion.
#
##############################################################################
CREATE MAPPING StatusMap
(
    'Identifier' = '@Identifier' ON INSERT ONLY,
    'Node' = '@Node' ON INSERT ONLY,
    'NodeAlias' = '@NodeAlias' ON INSERT ONLY NOTNULL '@Node',
    'Manager' = '@Manager' ON INSERT ONLY,
    'Agent' = '@Agent' ON INSERT ONLY,
    'AlertGroup' = '@AlertGroup' ON INSERT ONLY,
    'AlertKey' = '@AlertKey' ON INSERT ONLY,
    'Severity' = '@Severity',
    'Summary' = '@Summary',
    'StateChange' = '@StateChange',
    'FirstOccurrence' = '@FirstOccurrence' ON INSERT ONLY,
    'LastOccurrence' = '@LastOccurrence',
    'InternalLast' = '@InternalLast',
    'Poll' = '@Poll' ON INSERT ONLY,
    'Type' = '@Type' ON INSERT ONLY,
    'Tally' = '@Tally',

    'ProbeSubSecondId' = '@ProbeSubSecondId',
    'Class' = '@Class' ON INSERT ONLY,
    'Grade' = '@Grade' ON INSERT ONLY,
    'Location' = '@Location' ON INSERT ONLY,
    'OwnerUID' = '@OwnerUID',
    'OwnerGID' = '@OwnerGID',
    'Acknowledged' = '@Acknowledged',
    'Flash' = '@Flash',
    'EventId' = '@EventId' ON INSERT ONLY,
    'ExpireTime' = '@ExpireTime' ON INSERT ONLY,
    'ProcessReq' = '@ProcessReq',
    'SuppressEscl' = '@SuppressEscl',
    'Customer' = '@Customer' ON INSERT ONLY,
    'Service' = '@Service' ON INSERT ONLY,
    'PhysicalSlot' = '@PhysicalSlot' ON INSERT ONLY,
    'PhysicalPort' = '@PhysicalPort' ON INSERT ONLY,
    'PhysicalCard' = '@PhysicalCard' ON INSERT ONLY,
    'TaskList' = '@TaskList',
    'NmosSerial' = '@NmosSerial',
    'NmosObjInst' = '@NmosObjInst',
    'NmosCauseType' = '@NmosCauseType',
    'NmosDomainName' = '@NmosDomainName',
    'NmosEntityId' = '@NmosEntityId',
    'NmosManagedStatus' = '@NmosManagedStatus',
    'NmosEventMap' = '@NmosEventMap',
    'LocalNodeAlias' = '@LocalNodeAlias' ON INSERT ONLY,
    'LocalPriObj' = '@LocalPriObj' ON INSERT ONLY,
    'LocalSecObj' = '@LocalSecObj' ON INSERT ONLY,
    'LocalRootObj' = '@LocalRootObj' ON INSERT ONLY,
    'RemoteNodeAlias' = '@RemoteNodeAlias' ON INSERT ONLY,
    'RemotePriObj' = '@RemotePriObj' ON INSERT ONLY,
    'RemoteSecObj' = '@RemoteSecObj' ON INSERT ONLY,
    'RemoteRootObj' = '@RemoteRootObj' ON INSERT ONLY,
    'X733EventType' = '@X733EventType' ON INSERT ONLY,
    'X733ProbableCause' = '@X733ProbableCause' ON INSERT ONLY,
    'X733SpecificProb' = '@X733SpecificProb' ON INSERT ONLY,
    'X733CorrNotif' = '@X733CorrNotif' ON INSERT ONLY,
    'URL' = '@URL' ON INSERT ONLY,
    'ExtendedAttr' = '@ExtendedAttr' ON INSERT ONLY,
    'CollectionFirst' = '@CollectionFirst' ON INSERT ONLY,

##############################################################################
#
# CUSTOM alerts.status FIELD MAPPINGS GO HERE
#
##############################################################################

##############################################################################

    'ServerName' = '@ServerName' ON INSERT ONLY,
    'ServerSerial' = '@ServerSerial' ON INSERT ONLY
);

CREATE MAPPING JournalMap
(
    'KeyField' = TO_STRING(STATUS.SERIAL) + ":" + TO_STRING('@UID') + ":" + TO_STRING('@Chrono') ON INSERT ONLY,
    'Serial' = STATUS.SERIAL,
    'Chrono' = '@Chrono',
    'UID' = TO_INTEGER('@UID'),
    'Text1' = '@Text1',
    'Text2' = '@Text2',
    'Text3' = '@Text3',
    'Text4' = '@Text4',
    'Text5' = '@Text5',
    'Text6' = '@Text6',
    'Text7' = '@Text7',
    'Text8' = '@Text8',
    'Text9' = '@Text9',
    'Text10' = '@Text10',
    'Text11' = '@Text11',
    'Text12' = '@Text12',
    'Text13' = '@Text13',
    'Text14' = '@Text14',
    'Text15' = '@Text15',
    'Text16' = '@Text16'
);

CREATE MAPPING DetailsMap

(
    'KeyField' = '@Identifier' + '####' + TO_STRING('@Sequence') ON INSERT ONLY,
    'Identifier' = '@Identifier',
    'AttrVal' = '@AttrVal',
    'Sequence' = '@Sequence',
    'Name' = '@Name',
    'Detail' = '@Detail'
);
EOF

5. Create a gateway properties file, ICP_GATE.props, by copying the default unidirectional gateway properties file objserv_uni.props.

cp $NCHOME/omnibus/gates/objserv_uni/objserv_uni.props $NCHOME/omnibus/etc/ICP_GATE.props

6. Configure the new gateway properties file, $NCHOME/omnibus/etc/ICP_GATE.props. Set the on-premises ObjectServer AGG_P as the source, set the cloud ObjectServer ICP_AGG_V as the destination, and set the resync type to 'UPDATE', as in the following example:

cat << EOF >> $NCHOME/omnibus/etc/ICP_GATE.props
Name : 'ICP_GATE'
Gate.MapFile : '$NCHOME/omnibus/etc/ICP_GATE.map'
Gate.Reader.TblReplicateDefFile : '$NCHOME/omnibus/etc/ICP_GATE.tblrep.def'
Gate.Reader.Server : 'AGG_P'
Gate.Writer.Server : 'ICP_AGG_V'
Gate.Resync.Type : 'UPDATE'
Gate.Resync.LockType : 'NONE'
Gate.Writer.Description : 'collection_gate'
EOF

7. Start the new gateway with the following command:

$NCHOME/omnibus/bin/nco_g_objserv_uni -propsfile $NCHOME/omnibus/etc/ICP_GATE.props
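To confirm that the gateway connects to both the source and destination ObjectServers, you can watch its log file (a sketch; gateways typically write a log named after the gateway under $NCHOME/omnibus/log, so adjust the path if your deployment differs):

tail -f $NCHOME/omnibus/log/ICP_GATE.log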

Configuring a bidirectional gateway

About this task
Learn how to configure a bidirectional gateway to create an event feed between an on-premises ObjectServer and a deployment of Netcool Operations Insight on Red Hat OpenShift. An on-premises primary aggregation ObjectServer must exist and be configured according to Installing a primary aggregation ObjectServer.

Procedure
1. Follow steps 1 - 4 in “Configuring a uni-directional gateway” on page 201.
2. Create a gateway properties file, ICP_GATE.props, by copying the default bidirectional gateway properties file objserv_bi.props.

cp $NCHOME/omnibus/gates/objserv_bi/objserv_bi.props $NCHOME/omnibus/etc/ICP_GATE.props

3. Configure the new gateway properties file, $NCHOME/omnibus/etc/ICP_GATE.props. Set the on-premises ObjectServer AGG_P as the source, set the cloud ObjectServer ICP_AGG_V as the destination, and set the resync type to 'TWOWAYUPDATE', as in the following example:

cat << EOF >> $NCHOME/omnibus/etc/ICP_GATE.props
Name : 'ICP_GATE'
Gate.MapFile : '$OMNIHOME/etc/ICP_GATE.map'
Gate.ObjectServerA.Server : 'AGG_P'
Gate.ObjectServerA.TblReplicateDefFile : '$OMNIHOME/etc/ICP_GATE.tblrep.def'
Gate.ObjectServerB.Server : 'ICP_AGG_V'
Gate.ObjectServerB.TblReplicateDefFile : '$OMNIHOME/etc/ICP_GATE.tblrep.def'
Gate.Resync.Type : 'TWOWAYUPDATE'
Gate.ObjectServerA.Description : 'failover_gate'
Gate.ObjectServerB.Description : 'failover_gate'
EOF

4. Configure the IDUC hostname.

ObjectServers in cloud deployments respond to IDUC clients with their local, in-cloud hostnames. The /etc/hosts file of the gateway host must be updated to correctly resolve the IDUC host address and enable successful IDUC connections.
a) Identify the IDUC hostname of the primary and backup ObjectServers on your cloud deployment with the following commands:

kubectl describe pod custom_resource-ncoprimary-0 | grep IDUC_LISTENING_HOSTNAME
kubectl describe pod custom_resource-ncobackup-0 | grep IDUC_LISTENING_HOSTNAME

Where custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml /noi.ibm.com_nois_cr.yaml (CLI install). For example:

%> kubectl describe pod noi-ncoprimary-0 | grep IDUC_LISTENING_HOSTNAME
NCO_IDUC_LISTENING_HOSTNAME: noi-objserv-agg-primary-nodeport

%> kubectl describe pod noi-ncobackup-0 | grep IDUC_LISTENING_HOSTNAME
NCO_IDUC_LISTENING_HOSTNAME: noi-objserv-agg-backup-nodeport

b) Find the release FQDN, as in the following example:

%> helm get values --tls custom_resource | grep fqdn
fqdn: mycluster.icp

Where custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml /noi.ibm.com_nois_cr.yaml (CLI install). c) Use the host command on the returned FQDN value to find the cluster IP address, as in the following example:

%> host mycluster.icp
mycluster.icp has address 1.2.3.4

d) Edit the /etc/hosts file on the gateway host to map the cluster IP address to the cluster FQDN.

sudo sh -c 'cat << EOF >> /etc/hosts
1.2.3.4 noi-objserv-agg-primary-nodeport noi-objserv-agg-backup-nodeport
EOF'
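You can then confirm that the IDUC hostnames resolve correctly from the gateway host (an optional check; getent queries the same resolver that the gateway uses):

getent hosts noi-objserv-agg-primary-nodeport
getent hosts noi-objserv-agg-backup-nodeport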

5. Start the gateway with the following command:

$NCHOME/omnibus/bin/nco_g_objserv_bi -propsfile $NCHOME/omnibus/etc/ICP_GATE.props

Configuring analytics

You can switch different analytics algorithms on and off. You can also change the settings of active analytics algorithms.

About analytics

Read this document to find out more about analytics, including temporal correlation, seasonality, and probable cause.

Temporal correlation

Temporal correlation helps you to reduce noise by grouping events that share a temporal relationship. Event correlation policies are created, allowing temporal correlation to be applied to subsequent events that match the discovered temporal profile. Click “Administering policies created by analytics” on page 369 to read more about policies and how to review them. Temporal correlation is based on two capabilities:
• Temporal grouping: the temporal grouping analytic identifies related events based on their historic co-occurrences. Subsequent events that match the temporal profile are correlated together.

With temporal grouping, you can choose the policy deployment mode, which can be Deploy first or Review first. In Deploy first mode, policies are enabled automatically, without the need for manual review. In Review first mode, policies are not enabled until they are manually reviewed and approved.
• Temporal patterns: the temporal pattern analytic identifies patterns of behavior among temporal groups that are similar but occur on different resources. Subsequent events that match the pattern and occur on a new, common resource are grouped together.
Click “Configure temporal correlation” on page 207 to learn how to configure temporal correlation and choose the policy deployment mode after installation. Click “Displaying analytics details for an event group” on page 480 to see how temporal event groups are displayed to your operations team in the Events page.

Seasonality

Seasonal event enrichment helps you to identify events in your environment that consistently occur within a seasonal time window. The seasonal event analytic identifies these characteristics based on historical event occurrences. Seasonal events are enriched with a seasonal indicator, which displays whether an event occurred in, or outside of, an expected seasonal period. Examples of seasonal time windows include the following:
Hour of the day
    Between 12:00 and 1:00 pm
Day of the week
    On Mondays
Day of the month
    On the 3rd of the month
Day of the week at a given hour
    On Mondays, between 12:00 and 1:00 pm
Day of the month at a given hour
    On the 3rd of the month, between 12:00 and 1:00 pm

Click “Configuring seasonality” on page 207 to learn how to configure seasonality and “Displaying event seasonality” on page 474 to see how seasonal enrichment of events is displayed to your operations team in the Events page.

Probable cause

The probable cause capability is designed to identify the event with the greatest probability of being the cause of the event group, by analyzing the topological information within the events. Learn more about probable cause as part of the Netcool Operations Insight installation. Click “Configuring probable cause” on page 207 to learn how to configure probable cause and “Displaying probable cause for an event group” on page 479 to see how probable cause data is displayed to your operations team in the Events page.

Topological correlation

You can create topology templates to generate defined topologies, which search your topology database for instances that match their conditions. Operators see events that are grouped by topology based on these topology templates.

Click “Configuring topological correlation” on page 208 to learn how to configure topological correlation and click “Displaying analytics details for an event group” on page 480 to see how topological event groups are displayed to your operations team in the Events page.

Scope-based grouping

The scope-based event grouping capability groups events that come from the same place around the same time, because these events most likely relate to the same root problem. Operators see events that are grouped by scope based on the scope-based groups that are configured. You can define the scope as any one of the event columns; typical examples of scope are the Node or Location columns. Click “Configuring scope-based grouping” on page 208 to learn how to configure scope-based grouping and “Displaying analytics details for an event group” on page 480 to see how scope-based event groups are displayed to your operations team in the Events page.

Configure temporal correlation

Read this document to learn how to configure temporal correlation.

About this task
Follow these steps to configure temporal correlation to group events that occurred together historically.

Procedure
1. Go to Administration > Analytics configuration to access the analytics settings page.
2. Click Configure on the upper right corner of the Temporal correlation tile to access the temporal correlation configuration page. There are two temporal correlation capabilities: Temporal grouping and Temporal patterns.
3. Both capabilities are enabled by default. You can disable a capability by clicking the green toggle switch. If you disable Temporal grouping, Temporal patterns is automatically disabled too. However, if you disable Temporal patterns, you can still have Temporal grouping enabled.
4. You can also choose the Deploy policies state for Temporal grouping. Click either Deploy first or Review first to choose how policies are deployed. If you select Deploy first, the policies are automatically enabled and are added to the Created by analytics tile under Automations > Policies. If you select Review first, the policies are added to the Suggested tile and are enabled only after the administrator decides to activate them.
5. In the Temporal patterns tile, you also have the option to choose resource attributes within the events that you want to be ignored. The ignored attributes aren't considered by the analytic when generating pattern policies. Port and Type are ignored by default.

Configuring seasonality

Follow these steps to learn how to configure seasonality to enrich events that occur during cyclic time periods.

Procedure
1. Go to Administration > Analytics configuration to access the analytics settings page.
2. Click Configure on the upper right corner of the Seasonality tile to access the seasonality configuration page.
3. The seasonality capability is enabled by default. You can disable the capability by clicking the green toggle switch.

Configuring probable cause

Read this document to learn more about probable cause. The probable cause capability identifies the event with the greatest probability of being the cause of the event group, by using a combination of text classification and analysis of the topological information within the events. Learn more about probable cause as part of the Netcool Operations Insight installation.

The probable cause columns in the Object Server
Note: Make sure that the topology management integration is enabled to use the probable cause capability.
The probable cause columns in the Object Server are included by default in the Example_IBM_CloudAnalytics view of the Event Viewer. The columns are called "CEAEventScore" and "CEAEventClassification". The value shown in the "CEAEventScore" column is the calculated score for an event that indicates its probability of being the causal event within an event grouping. The "CEAEventClassification" column shows the classification of the event that is used as part of the scoring and can present one of the following labels:

Exception
Information
ErrorRate
Latency
Saturation
StateChange
Traffic
Unknown

The properties of probable cause
By default, the highest "CEAEventScore" is assigned to the name of the whole group. To disable this feature, you must disable the "CEAUseSummaryMimeChild" property in the master.cea_properties table in the Object Server by using the following command:

> update master.properties set IntValue = 0 where Name = 'CEAUseSummaryMimeChild';
> go

You have now disabled the property.
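For reference, the update is typically issued through the nco_sql interactive SQL client (a sketch; the AGG_P server name and root user are assumptions, so substitute your own ObjectServer entry and credentials):

$OMNIHOME/bin/nco_sql -server AGG_P -user root
> update master.properties set IntValue = 0 where Name = 'CEAUseSummaryMimeChild';
> go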

Configuring topological correlation

Read this document to find out how to configure topological correlation.

About this task
Ensure that you have topology management working and enabled to create topology templates and generate defined topologies, which search your topology database for instances that match their conditions. Click https://www.ibm.com/support/knowledgecenter/SS9LQB_1.1.8/Administering/t_asm_applyingviewtemplates.html to learn how to use topology templates to generate defined topologies. After you enable correlation for topology templates in topology management, topological groups are automatically created by Netcool Operations Insight and can be observed directly in the Events page.

Configuring scope-based grouping

Read this document to learn more about scope-based grouping.

About this task
You can group events based on known relationships. Based on the information stored about your infrastructure, you can automatically group events relating to an incident if they have the same scope and occur during the same period of time. Scope-based event grouping is provided as an extension in IBM Tivoli Netcool/OMNIbus. For information about how to configure the scope-based grouping capability, click https://www.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/use/omn_con_concept_extsbgeventswithea.html.

Disabling cloud native analytics Service Monitoring

The cloud native analytics self-monitoring service is enabled by default. You can disable this feature by removing the self-monitoring policy.

About this task
When enabled, the cloud native analytics self-monitoring policy causes OMNIbus to create an event every minute, with an Identifier field value of Event Analytics Service Monitoring. This heartbeat event follows the usual pathway of an event through the backend services of cloud native analytics. At the end of the pathway, the Grade field value of the event is set to the time at which the event was processed by the cloud native analytics self-monitoring policy. The heartbeat event is visible in the Event Viewer, and when cloud native analytics is functioning correctly the time-stamp value in its Grade field increases every minute. If cloud native analytics self-monitoring is enabled and the time-stamp in the Grade field of the heartbeat event is not incrementing after a few minutes, then there might be a failure in one of the cloud native analytics services. To disable cloud native analytics self-monitoring, use the following procedure.

Procedure
1. Retrieve the password that is required to access the policy registry.

kubectl get secret custom_resource-systemauth-secret -o=jsonpath='{.data.password}'

Where custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml /noi.ibm.com_nois_cr.yaml (CLI install). 2. Decode the password.

echo encoded_password | base64 --decode

Where encoded_password is the output from step “1” on page 209. 3. Find the ingress point for the policy registry service. • On Red Hat OpenShift run:

kubectl get routes

The value in the HOSTS/PORT column is the ingress point. 4. Run the following command to list all the policies. Then, find the policy ID of the policy that has a group_id value of self-monitoring.

curl -u system:password --insecure -X GET "https://ingress-point/api/policies/system/v1/cfd95b7e-3bc7-4006-a4a8-a73a79c71255/policies/system" -H "accept: application/json" -H "Content-Type: application/json"

Where:
• password is the decoded password output from step “2” on page 209.
• ingress-point is the ingress point for the policy registry, as found in step “3” on page 209.
5. Delete the self-monitoring policy with the following command:

curl -u system:password --insecure -X DELETE "https://ingress-point/api/policies/system/v1/cfd95b7e-3bc7-4006-a4a8-a73a79c71255/policies/batch/system" -H "accept: application/json" -H "Content-Type: application/json" -d '["policy_id"]'

Where:
• password is the decoded password output from step “2” on page 209.
• ingress-point is the ingress point for the policy registry, as found in step “3” on page 209.
• policy_id is the ID of the self-monitoring policy that you want to delete, as found in step “4” on page 209.
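For convenience, steps 1 and 2 can be combined into a single pipeline (a sketch, using the same secret name as above):

kubectl get secret custom_resource-systemauth-secret -o=jsonpath='{.data.password}' | base64 --decode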

Configuring integrations with other products

Perform these tasks to configure Cloud-based event sources, connections to support runbook automations, and API keys to support API functions.

Configuring incoming integrations

Configure incoming integrations from a wide range of Cloud event sources to provide event data to help monitor your environment.

Before you begin
You must already have the event source set up and monitoring your resources, with event information available.
Important: If you are configuring incoming integrations for event management in a cloud environment, a TLS certificate for the fully qualified domain name must be obtained from a well-known and trusted certificate authority. If a valid signed certificate is not available, refer to the product documentation of the event source to determine how to configure a self-signed certificate.

Normalized event structure
Netcool Operations Insight can easily ingest data from a wide range of Cloud environments, such as Amazon Web Services, Datadog, and Dynatrace. When ingesting data from Cloud environments, Netcool Operations Insight first converts the payload to a normalized event structure, referred to in a number of the topics in this section of the documentation. Subsequently, this normalized event structure is converted to a flat event structure that is compatible with the ObjectServer.

Integrating with Alert Notification

This event source is intended to receive alerts previously sent to Alert Notification.

Before you begin
When Alert Notification is integrated with Netcool Operations Insight, the responses returned are different from those returned for an alert sent to the stand-alone version of Alert Notification. Instead of the "Identifier" and "Short Id" that you receive in a response from Alert Notification, the integration with Netcool Operations Insight returns "deduplicationKey" and "eventid" in the response. The error return codes might also differ between the two products.

About this task
Using a webhook URL, alerts previously sent to Alert Notification are sent to Netcool Operations Insight as events.

Procedure
1. Click Administration > Integrations with other systems.
2. Click New integration.
3. Go to the Alert Notification tile and click Configure.

4. Enter a name for the integration and click Copy to add the generated webhook URL to the clipboard. Ensure that you save the generated webhook to make it available later in the configuration process. For example, you can save it to a file.
5. Use the webhook URL generated above to replace the URLs currently configured to send to your Alert Notification subscriptions. Unlike the Alert Notification Alerts API, no credentials are needed for this webhook.
6. Click Save.
7. To start receiving alerts from Alert Notification, ensure that Enable event management from this source is set to On.

210 IBM Netcool Operations Insight: Integration Guide The following table defines the relationship between Alert Notification attributes and event management attributes.

Table 37. Attribute mapping

Alert Notification attribute          Normalized attribute
Type                                  type.statusOrThreshold
Where                                 resource.name and resource.type
What                                  summary
Source                                sender.name and sender.sourceId
Alert timestamp                       firstOccurrence
Severity                              severity
ApplicationsOrServices and Details    event.details
URLs                                  URLs

type.eventType is set to Alert. eventState is set by a combination of Severity and Type. For example, a Severity = 0 and Type = resolution in Alert Notification would have an eventState = clear in event management. sender.type is set to Alert Notification and sender.displayName is set by the name of the integration assigned in Alert Notification.
Note: Alert Notification attributes EmailMessageToSend, SMSMessageToSend, and VoiceMessageToSend are not currently mapped.
The following example payloads show an alert from Alert Notification received as a normalized event.

Alert Notification alert

{ "What": "ANS FVT NORMALIZER TEST ALERT 1", "Where": "ANS Normalizer Alert 1 ", "Severity": 6, "Type": "Problem", "Source": "sourcetest", "ApplicationsOrServices": [ "AppServ1","AppServ2" ], "URLs": [ { "Description": "my slack url", "URL": "https://alertnotify.slack.com/messages/@slackbot/" }, { "Description": "my yahoo mail", "URL": "https://mail.yahoo.com" } ], "Details": [ { "Name": "detailname1", "Value": "value1" }, { "Name": "detailname2", "Value": "value2" } ], "EmailMessageToSend": { "Subject": "Subject test", "Body": "Body test" }, "SMSMessageToSend": "SMS test",

  "VoiceMessageToSend": "Voice test"
}

Normalized event

[{ "deduplicationKey": "29365ff68f81109883bc9d548cdfa0f7", "displayId": "gnpm-l71l", "eventState": "open", "firstOccurrence": "2019-01-31T16:32:02.709Z", "flapping": false, "incidentUuid": "bd8a0050-2575-11e9-b5c0-edde14a12067", "instanceUuid": "bd83e5d0-2575-11e9-ae58-1a73cc97bd4b", "lastOccurrence": "2019-01-31T16:32:02.709Z", "priority": 5, "resource": { "name": "ANS Normalizer Alert 1 1548952340041", "type": "ANS Normalizer Alert 1 1548952340041" }, "sender": { "type": "Alert Notification", "name": "sourcetest", "sourceId": "sourcetest", "displayName": "ANS_Automation_1548952340041" }, "severity": 5, "severity10": 60, "details": { "ApplicationOrService0": "AppServ1", "ApplicationOrService1": "AppServ2", "detailname1": "value1", "detailname2": "value2" "summary": "ANS FVT NORMALIZER TEST ALERT 1", "type": { "eventType": "Alert", "statusOrThreshold": "Problem" }, "urls": [{ "url": "https://alertnotify.slack.com/messages/@slackbot/", "description": "my slack url" }, { "url": "https://mail.yahoo.com", "description": "my yahoo mail" }] }]

Configuring Amazon Web Services (AWS) as an event source

Amazon Simple Notification Service (SNS) is a service provided by Amazon Web Services (AWS) that enables applications, end users, and devices to instantly send and receive notifications from the cloud. You can set up an integration with Netcool Operations Insight to receive notifications from AWS.

Before you begin
The following event types are supported for this integration:
• Amazon CloudWatch Alarms
• Amazon CloudWatch Events
• Amazon CloudWatch Logs

About this task
For more information about Amazon SNS, see https://aws.amazon.com/documentation/sns/. Using a webhook URL, alerts generated by AWS monitoring are sent to the event management service as events.

Procedure
1. Click Administration > Integrations with other systems.
2. Click New integration.
3. Go to the Amazon Web Services tile and click Configure.

4. Enter a name for the integration and click Copy to add the generated webhook URL to the clipboard. Ensure that you save the generated webhook to make it available later in the configuration process. For example, you can save it to a file.
5. Click Save.
6. Log in to your Amazon Web Services account at https://us-west-2.console.aws.amazon.com/sns/v2/home?region=us-west-2#/topics
7. Click Create new topic, provide a topic name, and click Create topic.
8. Go to the ARN column in the table and click the link for your topic.
9. Click Create subscription and set the fields as follows:
a) Select HTTPS from the Protocol list.
b) Paste the webhook URL into the Endpoint field. This is the generated URL provided by event management.
c) Click Create subscription.
10. Configure your AWS alarms to send notifications to the Amazon SNS topic you created. The Amazon SNS topic is then used to forward the notification as events to event management. For example, you can use Amazon CloudWatch alarms to monitor metrics and send notifications to topics as described in http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html.
11. To start receiving alert information from AWS, ensure that Enable event management from this source is set to On.

Configuring AppDynamics as an event source AppDynamics provides application performance and availability monitoring. You can set up an integration with Netcool Operations Insight to receive alert information from AppDynamics.

Before you begin The following event types are supported for this integration: • Application monitoring • End-User Monitoring (RUM: Mobile and Browser) • Database visibility • Infrastructure/Server visibility

About this task Using a webhook URL, you set up an integration with AppDynamics, and create customized HTTP request templates that post alert information to event management based on trigger conditions defined in the actions of your AppDynamics policies. The alerts generated by the triggers are sent to the event management service as events.

Procedure 1. Click Administration > Integrations with other systems. 2. Click New integration. 3. Go to the AppDynamics tile and click Configure.

4. Enter a name for the integration and click Copy to add the generated webhook URL to the clipboard. Ensure you save the generated webhook to make it available later in the configuration process. For example, you can save it to a file. 5. Click Save. 6. Log in to your account at https://www.appdynamics.com/. 7. Create a new HTTP request template:

a) Click the Alert & Respond tab.
b) Click HTTP Request Templates in the menu bar on the left, and click New to add a new template.
c) Enter a name for the template.
d) In the Request URL section, select POST from the Method list, and paste the webhook URL from event management in the Raw URL field.
e) In the Payload section, select application/json from the MIME Type list, and paste the following text in the field:

{ "controllerUrl": "${controllerUrl}", "accountId": "${account.id}", "accountName": "${account.name}", "policy": "${policy.name}", "action": "${action.name}", #if(${notes}) "notes": "${notes}", #end "topSeverity": "${topSeverity}", "eventType": "${latestEvent.eventType}", "eventId": "${latestEvent.id}", "eventGuid": "${latestEvent.guid}", "displayName": "${latestEvent.displayName}", "eventTime": "${latestEvent.eventTime}", "severity": "${latestEvent.severity}", "applicationName": "${latestEvent.application.name}", "applicationId": "${latestEvent.application.id}", "tier": "${latestEvent.tier.name}", "node": "${latestEvent.node.name}", #if(${latestEvent.db.name}) "db": "${latestEvent.db.name}", #end #if(${latestEvent.healthRule.name}) "healthRule": "${latestEvent.healthRule.name}", #end #if(${latestEvent.incident.name}) "incident": "${latestEvent.incident.name}", #end "affectedEntities": [ #foreach($entity in ${latestEvent.affectedEntities}) { "entityType": "${entity.entityType}", "name": "${entity.name}" } #if($foreach.hasNext), #end #end ], "deepLink": "${latestEvent.deepLink}", "summaryMessage": "$!{latestEvent.summaryMessage.replace('"','')}", "eventMessage": "$!{latestEvent.eventMessage.replace('"','')}", "healthRuleEvent": ${latestEvent.healthRuleEvent}, "healthRuleViolationEvent": ${latestEvent.healthRuleViolationEvent}, "btPerformanceEvent": ${latestEvent.btPerformanceEvent}, "eventTypeKey": "${latestEvent.eventTypeKey}" }

f) In the Response Handling Criteria section, under Success Criteria, click Add Success Criteria, and select 200 from the Status Code list.
g) In the Settings section, select the Check One Request Per Event check box.
h) Click Save.
8. Test your new template. Click Test, then click Add Event Type, and select an event type. Click Run Test. Sample test events are generated and correlated into an incident in event management. To view the incident and its events, go to the Incidents tab on the event management UI, click the All incidents list, and look for incidents that have a description containing Cluster: Sample tier. The event information for these incidents has the event source type set to AppDynamics. The event information is available by clicking Events on the incident bar, and then clicking the See more info button to access all details available for the selected event.
9. Create a new action and add your new template to the action:
a) Click Actions in the menu bar on the left, and click Create Action to add a new action.
b) Select the Make an HTTP Request radio button, and click OK.

c) Enter a name for the action and select the template you created from the HTTP Request Template list.
d) Click Save.
10. Add the new action to your AppDynamics policies:
a) Click Policies in the menu bar on the left, and click Create Policy to add a new policy, or click Edit to edit an existing policy.
b) Click Trigger in the menu bar on the left, and select the check box for the events that you want to have alerts triggered as part of this policy. The events you select depend on your environment and requirements. For example, you can select all the Health Rule Violation events.
c) Click Actions in the menu bar on the left, and click Add.
d) Select Make an HTTP Request from the list and click Select.
e) Click Save.
11. To start receiving alert information from the AppDynamics policies based on trigger conditions, ensure that Enable event management from this source is set to On.

Configuring Datadog as an event source Datadog provides a monitoring service for your cloud infrastructure. You can set up an integration with Netcool Operations Insight to receive alert information from Datadog.

Before you begin The following event types are supported for this integration:
• service_check
– Host - Check alert (event per host)
– Host - Cluster alert (event per cluster/group)
– Process - Check alert (event per host, per process)
– Process - Cluster alert (event per process)
– Network service - Check alert (event per host, per service)
– Network service - Cluster alert (event per network)
• metric_alert_monitor (over single host)
• metric_alert_monitor (over cluster/tag)
• query_alert_monitor (complex query when configuring a Metric alert)
Log management events from Datadog are not currently supported.

About this task Using a webhook URL, alerts generated by Datadog monitors are sent to the event management service as events.

Procedure 1. Click Administration > Integrations with other systems. 2. Click New integration. 3. Go to the Datadog tile and click Configure.

4. Enter a name for the integration and click Copy to add the generated webhook URL to the clipboard. Ensure you save the generated webhook to make it available later in the configuration process. For example, you can save it to a file. 5. Click Save. 6. Log in to your account at http://www.datadoghq.com. 7. Click Integrations in the navigation menu.

8. Go to the webhooks tile and click Install, or click Configure if you already have other webhooks set up.
9. Click the Configuration tab, and add a name for the webhook integration in the first available field of the Name and URL section at the bottom of the form.
10. Paste the webhook URL into the second field. This is the field after the one where you added the name. This is the generated URL provided by event management.
11. Click Install Integration or Update Configuration, and close the window.
12. Set the webhook for each monitor you want to receive alerts from as follows:
a) Click Monitors > Manage Monitors in the navigation menu on the left side of the window.
b) For existing monitors, hover over the monitor you want to receive alerts from and click Edit, or click New Monitor if you are setting up a new monitor.
c) Go to the Say what's happening section and ensure you enter a title for your events in the header text field. For Cluster Alerts, enter a title that includes the following: [Cluster: resource_monitored]. Enter the title in the following format: Title text [Cluster: resource monitored]. For example: Some of [Cluster: http_service on redhat] is down. This title is required for the correlation of your Datadog events into incidents.
d) Go to the main body text field of the Say what's happening section, and type @. The available webhook names are listed. Select the name of your webhook integration. The name is also added to the Notify your team section.
Tip: You can also select your webhook name from the drop-down list in the Notify your team section. You can also select users to notify. The selected webhook and users are added to the message in the Say what's happening section.
e) Click Save.
f) Repeat these steps for each monitor you want to receive alerts from.
13. To start receiving alert information from the Datadog monitors, ensure that Enable event management from this source is set to On.

Configuring Dynatrace as an event source Dynatrace provides application performance monitoring. You can set up an integration with Netcool Operations Insight to receive problem notifications from Dynatrace.

Before you begin The following event types are supported for this integration: • Applications • Synthetic (browser/availability) • Transactions and services • Databases • Hosts • Network

About this task Use a webhook URL and a custom payload to set up the integration between Dynatrace and event management.

Procedure 1. Click Administration > Integrations with other systems. 2. Click New integration.

3. Go to the Dynatrace tile and click Configure.

4. Enter a name for the integration and click Copy to add the generated webhook URL to the clipboard. Ensure you save the generated webhook to make it available later in the configuration process. For example, you can save it to a file.

5. Go to step 3 and click Copy to add the custom payload to the clipboard. Ensure you save the custom payload to make it available later in the configuration process. For example, you can save it to a file.
6. Click Save.
7. Log in to your account at https://www.dynatrace.com/ and set up a custom integration:
a) Go to Settings > Integration > Problem notifications.
b) Click Set up notifications, and select Custom integration.
c) On the Set up custom integration page, paste the webhook URL from event management in the Webhook URL field.
d) Paste the custom payload from event management in the Custom payload field.
e) Click Save.
For more information about setting up custom integrations in Dynatrace, see https://www.dynatrace.com/support/help/problem-detection/problem-notification/how-can-i-set-up-outgoing-problem-notifications-using-a-webhook/.
8. Set the alerting rules for Availability, Error, Slowdown, Resource, and Custom alerts in Dynatrace as described in https://www.dynatrace.com/support/help/problem-detection/problem-notification/how-can-i-filter-problem-notifications-with-alerting-profiles/. The alerting rules determine what problem notifications are sent to event management as events.
9. Set the anomaly detection sensitivity for infrastructure components in Dynatrace as described in https://www.dynatrace.com/support/help/problem-detection/anomaly-detection/how-do-i-adjust-anomaly-detection-for-infrastructure-components/. The detection sensitivity and alert thresholds determine what problem notifications are sent to Netcool Operations Insight as events.
10. To start receiving problem notifications as events from Dynatrace, ensure that Enable event management from this source is set to On.

Configuring Elasticsearch as an event source Elasticsearch is a distributed, RESTful search and analytics engine that stores data as part of the Elastic Stack. You can set up an integration with Elasticsearch to send log information to Netcool Operations Insight as events.

Before you begin The Elasticsearch event source is only supported when event management is deployed in an IBM Cloud Private environment. Ensure you have the X-Pack extension for the Elastic Stack installed as described in https://www.elastic.co/guide/en/x-pack/current/installing-xpack.html.
The following event types are supported for this integration:
• X-Pack Alerting

About this task Using the X-Pack Alerting (via Watcher) feature, you configure watches to send event information to event management. For information about X-Pack Alerting via Watcher, see https://www.elastic.co/guide/en/x-pack/current/how-watcher-works.html.

Procedure 1. Click Administration > Integrations with other systems. 2. Click New integration. 3. Go to the Elasticsearch tile and click Configure.

4. Enter a name for the integration and click Copy to add the generated webhook URL to the clipboard. Ensure you save the generated webhook to make it available later in the configuration process. For example, you can save it to a file.
5. Click Save.
6. Configure the X-Pack watcher feature in Elasticsearch to forward events to event management. For example, to configure the watcher using the Kibana UI:
a) Log in to the Kibana UI and access the Watcher UI as described in https://www.elastic.co/guide/en/kibana/7.4/watcher-ui.html#watcher-getting-started. If you are using IBM Cloud Private, you can configure the included Elasticsearch engine to send events to event management. You can open the Kibana UI from the navigation menu in IBM Cloud Private by clicking Network Access > Services > Kibana, or by clicking Platform > Logging.
Note: Ensure you have Kibana installed in IBM Cloud Private as described in https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0.3/featured_applications/kibana_service.html.
b) Create a new advanced watch as described in https://www.elastic.co/guide/en/kibana/7.4/watcher-ui.html#watcher-create-advanced-watch. Update the fields as follows:
• Enter an ID and name.
• Configure your watch definition based on your requirements and add it to the Watch JSON field. For more information, see https://www.elastic.co/guide/en/x-pack/6.2/how-watcher-works.html#watch-definition.
• Paste the webhook URL from event management in the url field under the actions settings.
The following is an example watch definition for IBM Cloud Private environments where the watch is triggered every 5 minutes to load the Logstash logs that were written in the last 5 minutes and contain any of the following keywords: failed, error, or warning. The watcher posts the payload for such logs to event management using the webhook URL.

{ "trigger": { "schedule": { "interval": "5m" } }, "input": { "search": { "request": { "indices": [ "logstash-2018*" ], "body": { "query": { "bool": { "must_not": { "match": { "kubernetes.container_name": "custom-metrics-adapter" } }, "filter": [ { "range": { "@timestamp": { "gte": "now-5m" } } }, { "terms": { "log": [ "failed",

218 IBM Netcool Operations Insight: Integration Guide "error", "warning" ] } } ] } } } } } }, "actions": { "my_webhook": { "webhook": { "method": "POST", "headers": { "Content-Type": "application/json" }, "url": "", "body": "{{#toJson}}ctx.payload{{/toJson}}" } } } }

Important: Ensure you set the trigger for the watch to a frequency that suits your requirements for monitoring the logs. Consider the load on the system when setting frequency. In the previous example, the watch is triggered every 5 minutes to load the logs that were written in the last 5 minutes using the "schedule": {"interval": "5m"} and "@timestamp": {"gte": "now-5m" } settings. If you set interval to less than 5 minutes in this case, then the same logs are sent to event management more than once, repeating event data in the correlated incidents. Restriction: The "terms": {"log": []} section in the watch definition determines the mapping to the event severity levels in event management. The default values are "failed", "error", and "warning", and are mapped to "critical", "major", and "minor" severity levels. If you use any other value, the event severity is mapped to "indeterminate" in event management.

Attention: In IBM Cloud Private environments ensure you exclude "kubernetes.container_name": "custom-metrics-adapter" from your watch definition using the following setting:

"must_not": { "match": { "kubernetes.container_name": "custom-metrics-adapter" }

The size of the custom-metric-adapter logs can be large and overload event management processing. In addition, the log format is unreadable to users.
c) Save the watch.
7. If you are using IBM Cloud Private, ensure the X-Pack watcher feature is enabled; for example:
a) Load the ELK (Elasticsearch, Logstash, Kibana) stack ConfigMap into a file using the following command: kubectl get configmaps logging-elk-elasticsearch-config --namespace=kube-system -o yaml > elasticsearch-config.yaml
b) Edit the elasticsearch-config.yaml file to enable the watcher: xpack.watcher.enabled: true
c) Save the file, and replace the ConfigMap using the following command: kubectl --namespace kube-system replace -f elasticsearch-config.yaml
d) Restart Elasticsearch and Kibana.
8. To start receiving log information as events from Elasticsearch, ensure that Enable event management from this source is set to On.
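As an alternative to creating the watch through the Kibana UI in step 6, X-Pack Watcher also accepts the watch definition through its REST API. The following curl sketch uses placeholder credentials and host name; on Elasticsearch 7.x the path is /_watcher/watch/<id>, while on 6.x it is /_xpack/watcher/watch/<id>, and watch.json is assumed to hold the watch definition shown earlier.

# Create (or overwrite) a watch named cem-log-watch via the Watcher API.
# watch.json contains the watch definition from the previous example.
curl -u <user>:<password> -X PUT \
  -H "Content-Type: application/json" \
  -d @watch.json \
  "https://<elasticsearch-host>:9200/_watcher/watch/cem-log-watch"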

Configuring IBM UrbanCode Deploy as an event source You can set up an integration with Netcool Operations Insight to receive notifications created by IBM UrbanCode Deploy. IBM UrbanCode Deploy is a tool for automating application deployments through your environments. It facilitates rapid feedback and continuous delivery in agile development, while providing the audit trails, versioning, and approvals needed in production. Emails are sent to Netcool Operations Insight as events.

Before you begin You must have a Docker account. The following event types are supported for this integration: • IBM UrbanCode Deploy email notifications

About this task IBM UrbanCode Deploy sends email notifications when user-defined trigger events occur on the server. You must configure the email probe container to retrieve emails from the email account and perform the normalization. After the normalization, the probe will send the events to event management.

Procedure
1. Click Administration > Integrations with other systems.
2. Click New integration.
3. Go to the IBM UrbanCode Deploy tile and click Configure.
4. Enter a name for the integration.
5. Click Download file to download and decompress the email-probe-package.zip file.
Important: The download file contains credential information and should be stored in a secure location.
6. Extract the package into a Docker environment where Docker and Docker Compose are installed.
a) In the location where you extracted the download package, identify the file interfaces.http.
b) Edit interfaces.http and find the line that reads as follows:

"hostname" : "netcool.release_name.ingress_fqdn/normlundefined"

c) Modify this line to read as follows:

"hostname" : "netcool.release_name.ingress_fqdn/norml"

7. Grant execution rights to integration.sh, for example chmod 755 integration.sh.
8. Go to https://store.docker.com/images/ibm-netcool-probe-email to read the description and then click Proceed to checkout on the right of the page. Enter the required contact information and click Get Content.
9. Run docker login in your Docker environment.
10. Uncomment LICENSE=accept in probe.env to accept the license agreement.
11. Update probe.env to populate EMAIL_SERVER_HOSTNAME, USERNAME, PASSWORD, and FILTER. An example probe.env is shown after this list.
• EMAIL_SERVER_HOSTNAME is used to specify the email server host name, such as gmail.com.
• USERNAME is used to specify the user name of the email account.
• PASSWORD is used to specify the plain password to access the email account. The plain password will be encrypted when the container is running. Note: Do not set a password that starts with ENCRYPTED as this keyword is used to determine whether it is a plain or encrypted password.
• FILTER is used to specify the UCD sender email address. The sender email address can be found in UCD Home > Settings > System Settings > Mail Server Settings > Mail Server Sender Address.
• Optionally, you can update POLLINTERVAL to specify the frequency in seconds for the probe to retrieve new emails. The default value is 600 seconds.
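For illustration, a completed probe.env based on the settings above might look like the following sketch. The host name, account, and addresses are example values only; substitute the details of your own email account and UCD server.

# Accept the license agreement (uncommented from the shipped file).
LICENSE=accept
# Email server and account that receives the UCD notifications.
EMAIL_SERVER_HOSTNAME=gmail.com
USERNAME=ucd.alerts@example.com
PASSWORD=myPlainPassword
# Sender address configured in UCD Mail Server Settings.
FILTER=ucd-sender@example.com
# Optional: poll for new emails every 300 seconds instead of the default 600.
POLLINTERVAL=300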

12. Run docker-compose up to start the probe.
Note:
a. The probe only connects to the IMAP mail server over a TLS connection, so the email server must have a valid certificate.
b. The probe only supports the default UCD notification template.
c. The probe deletes emails from the mail server after retrieving them.
d. For the probe to run smoothly, avoid updating other probe properties in email.props.
Unsupported Events
There are four mandatory normalized fields required to publish an event in Netcool Operations Insight. The attributes are Resource Name, Summary, Event Type, and Severity. If an Unknown - error is displayed for any of these fields, you will need to update the UCD email notification template. This might happen if you have used a custom notification template. The following table contains some error messages, possible causes, and resolutions.

Table 38. Mandatory CEM field error messages

Normalized Field: Summary
Message: Unknown - Missing the Subject field in UCD email.
Possible Cause: Missing the Subject field in UCD email.
Resolution: Update the notification template to add the Subject field.

Normalized Field: Resource Name
Message: Unknown - Missing the expected format of Application name in UCD email.
Possible Cause: The Application field in the UCD email is not following the format in the default email notification template.
Resolution: Follow the exact format used in the default notification templates.

Normalized Field: Resource Name
Message: Unknown - Missing the expected format of Process name in UCD email.
Possible Cause: The Process field in the UCD email is not following the format in the default email notification template.
Resolution: Follow the exact format used in the default notification templates.

Normalized Field: Resource Name
Message: Unknown - Missing the Application or Process name in UCD email.
Possible Cause: Missing the Application or Process field in UCD email.
Resolution: Update the notification template to add the Application or Process field.

Normalized Field: Event Type
Message: Unknown - Missing the keyword of Process or Approval at the Subject field in UCD email to indicate the event type.
Possible Cause: Missing the keyword of Process or Approval at the Subject field in UCD email to indicate the event type.
Resolution: Update the notification template to add the keyword of Process or Approval at the Subject field.

13. To start receiving alert notifications from IBM UrbanCode Deploy, ensure that Enable event management from this source is set to On.
If you feel that the current webhook URL for the email probe has been compromised in some way, you can download the email probe zip file again to regenerate the webhook. This invalidates the existing webhook URL and replaces it with a new one. In this scenario, you must repeat the configuration steps to save the zip file in a Docker environment and rerun docker-compose to start the email probe with the new webhook.

Configuring Jenkins as an event source Jenkins helps automate software development processes such as builds to allow continuous integration. You can set up an integration with Netcool Operations Insight to receive notifications about jobs from Jenkins projects.

Before you begin

If your Netcool Operations Insight installation is using a certificate authority (CA) that is not well known, then you will need to ensure that your CA is trusted by Jenkins. Complete these steps to convert your CA certificate into the correct format and import it into the Jenkins trust store. 1. Run the following command:

openssl pkcs7 -in cert.pem -out cert.crt -print_certs

2. Import your certificate to the JVM keystore as a trusted certificate:

keytool -storepass <storepass> -import -noprompt -trustcacerts -alias <alias> -keystore cacerts -file cert.crt

3. Restart your Jenkins server process to pick up the new certificate. 4. Ensure your Jenkins server host can resolve the domain name of your Cloud Event Management installation. 5. Modify the DNS server or add the host and domain name to the hosts file.

About this task Notifications can be sent for single job stages or all stages of a job. Configure each project from which you want to receive notifications separately. The notifications are raised in event management as events. The events are then correlated into incidents. Important: The Jenkins server needs the Notification Plug-in to send the notifications. A representative example of the notification payload is shown below.
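For orientation, the Notification Plug-in posts job phase information as JSON to the configured endpoint. The fragment below is representative of that format rather than an exact schema; field names and contents vary with the plug-in version and job type.

{
  "name": "my-build-job",
  "url": "job/my-build-job/",
  "build": {
    "number": 12,
    "phase": "FINALIZED",
    "status": "SUCCESS",
    "full_url": "https://jenkins.example.com/job/my-build-job/12/",
    "log": "last lines of the build log, per the Log setting"
  }
}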

Procedure 1. Click Administration > Integrations with other systems. 2. Click New integration. 3. Go to the Jenkins tile and click Configure.

4. Enter a name for the integration and click Copy to add the generated webhook URL to the clipboard. Ensure you save the generated webhook to make it available later in the configuration process. For example, you can save it to a file.
5. Click Save.
6. Log in to your Jenkins server as administrator.
7. Ensure that the Notification Plug-in is installed on your Jenkins server.
Tip: Check first whether the plug-in is installed by clicking Jenkins > Manage Jenkins > Manage Plugins. Go to the Installed tab and look for the Notification plugin. If it is not in the list of installed plug-ins, go to the Available tab and search for Notification plugin. Select the check box for the plug-in and click Install.
8. Configure the Jenkins project you want to receive notifications from as follows:
a) Click the project name and then click Configure.
b) Click the Job Notifications tab, and click Add Endpoint.
c) Set up the connection as follows:
• Select JSON from the Format list.
• Select HTTP from the Protocol list.

• Select when you want to receive notifications about the job from the Event list. For example, All Events sends a notification for each job phase, while Job Finalized only triggers a notification when the job has completed, including post-build activities. Select All Events to receive detailed information about the jobs.
• Paste the webhook URL into the URL field. This is the generated URL provided by event management.
• Enter 5 in the Log field. This determines the number of lines to include from the log in the message.
d) Click Save.
e) Repeat the steps for each project you want to receive notifications from.
9. To start receiving notifications about Jenkins jobs, ensure that Enable event management from this source is set to On.

Configuring Logstash as an event source When Netcool Operations Insight is deployed in an IBM Cloud Private environment, you can forward log data to Netcool Operations Insight from Logstash.

Before you begin By default, the IBM Cloud Private installer deploys an Elasticsearch, Logstash, and Kibana (ELK) stack to collect system logs for the IBM Cloud Private managed services, including Kubernetes and Docker. For more information, see https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0.2/manage_metrics/logging_elk.html.
The following event types are supported for this integration:
• Logs
Note: Ensure you meet the prerequisites for IBM Cloud Private, such as installing and configuring kubectl, the Kubernetes command line tool.

About this task The log data collected and stored by Logstash for your IBM Cloud Private environment can be configured to be forwarded to event management as event information and then correlated into incidents.

Procedure 1. Click Administration > Integrations with other systems. 2. Click New integration. 3. Go to the Logstash tile and click Configure.

4. Enter a name for the integration and click Copy to add the generated webhook URL to the clipboard. Ensure you save the generated webhook to make it available later in the configuration process. For example, you can save it to a file.
5. Click Save.
6. Modify the default Logstash configuration in IBM Cloud Private to add event management as a receiver. To do this, edit the Logstash pipeline ConfigMap to add the webhook URL in the output section as follows:
a) Load the ConfigMap into a file using the following command: kubectl get configmaps logstash-pipeline --namespace=kube-system -o yaml > logstash-pipeline.yaml
Note: The default Logstash deployment ConfigMap name in IBM Cloud Private is logstash-pipeline in the kube-system namespace. If your IBM Cloud Private logging uses a different Logstash deployment, modify the ConfigMap name and namespace as required for that deployment.

b) Edit the logstash-pipeline.yaml file and add an HTTP section to specify event management as a destination using the generated webhook URL. Paste the webhook URL into the url field:

output {
  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd}"
    hosts => "elasticsearch:9200"
  }
  http {
    url => ""
    format => "json"
    http_method => "post"
    pool_max_per_route => "5"
  }
}

Note: The pool_max_per_route value is set to 5 by default. It limits the number of concurrent connections to event management to avoid data overload from Logstash. You can modify this setting as required.
c) Save the file, and replace the ConfigMap using the following command: kubectl --namespace kube-system replace -f logstash-pipeline.yaml
d) Check the update is complete at https://<master_host>:8443/console/configuration/configmaps/kube-system/logstash-pipeline, where <master_host> is the host name or IP address of your IBM Cloud Private console.
Note: It can take up to a minute for the configuration changes to take effect.
7. To start receiving log data from Logstash, ensure that Enable event management from this source is set to On.
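If forwarding every log line creates too much event volume, you can wrap the http output in a Logstash conditional so that only matching entries reach event management. This is a minimal sketch, assuming the log text is held in a field named log, as in the Elasticsearch watcher example earlier in this chapter:

output {
  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd}"
    hosts => "elasticsearch:9200"
  }
  # Forward only lines that mention errors or failures.
  if "error" in [log] or "failed" in [log] {
    http {
      url => ""
      format => "json"
      http_method => "post"
      pool_max_per_route => "5"
    }
  }
}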

Configuring Microsoft Azure as an event source Microsoft Azure provides monitoring services for Azure resources. You can set up an integration with Netcool Operations Insight to receive alert information from Microsoft Azure.

Before you begin The following event types are supported for this integration: • Azure Classic Metrics and Azure Metrics • Azure Activity Log Alerts • Azure Auto-shutdown notification • Azure Auto-scales notification • Azure Log Search Alerts

About this task • Using a webhook URL, alerts generated by Microsoft Azure monitoring are sent to the event management service as events. • No expiry time is set for Azure Log Alerts in event management.

Procedure 1. Click Administration > Integrations with other systems. 2. Click New integration. 3. Go to the Microsoft Azure tile and click Configure.

4. Enter a name for the integration and click Copy to add the generated webhook URL to the clipboard. Ensure you save the generated webhook to make it available later in the configuration process. For example, you can save it to a file. 5. Click Save. 6. Log in to your Microsoft Azure account at https://portal.azure.com/.

7. Go to the Dashboard and select the resource you want event information from. Click the resource name.
8. Go to MONITORING in the navigation menu and click Alerts.
9. Click New alert rule at the top of the page.
10. Set up the rule as follows:
Remember: For Azure Log Search Alerts select Application Insight or Log Analytics Workspaces under the Resource section.
a) Enter a name for the rule and add a description.
b) Select the metric that you want this alert rule to monitor for the selected resource.
c) Set a condition and enter a threshold value for the metric. When the threshold value for the set condition is reached, an alert is generated and sent as an event to event management.
d) Select the time period to monitor the metric data.
e) Optional: Set up email notification.
f) Paste the webhook URL into the Webhook field. This is the generated URL provided by event management.
g) Click OK.
11. To start receiving alert information from Microsoft Azure, ensure that Enable event management from this source is set to On.

Configuring Microsoft System Center Operations Manager as an event source System Center Operations Manager (SCOM) is a cross-platform data center monitoring system for operating systems and hypervisors. You can set up an integration with Netcool Operations Insight to receive notifications created by SCOM.

About this task Download the integration package from event management and import the scripts into your SCOM server. Sample commands are provided for Windows operating systems. Copy the notification script scom-cem.ps1 to any accessible directory on your SCOM server. A channel, a subscriber, and a subscription are required in SCOM to use the scom-cem.ps1 script to forward notifications to event management. The default resource type is "Server". "Database", "Application", and "Service" are also supported.

Procedure 1. Click Administration > Integrations with other systems. 2. Click New integration. 3. Go to the Microsoft System Center Operations Manager tile and click Configure. 4. Enter a name for the integration. 5. Click Download file to download the scom-cem.ps1 script file. 6. Copy the script to any accessible directory on your SCOM server. Use the following command (on Windows) to copy scom-cem.ps1 to any directory on your SCOM server:

copy scom-cem.ps1 C:\<directory>

Replacing <directory> with your chosen directory.
7. To prevent malicious scripts from running on your machine, Windows prevents downloaded internet files from being runnable. Complete these steps to unblock the file:
a) Browse to the scom-cem.ps1 file using Windows Explorer.
b) Right-click the file and select Properties.
c) On the General tab, under Security, select the Unblock check box.
d) Click OK.

8. Edit the scom-cem.ps1 script file and locate the following line:

Import-Module "C:\Program Files\Microsoft System Center 2016\Operations Manager\Powershell \OperationsManager\OperationsManager.psm1"

Replace the file path with the location of OperationsManager.psm1 in your environment.
9. Open the SCOM Operations Console to create a command channel, subscriber, and subscription for event management to integrate with the scom-cem.ps1 script file. To create a command channel:
a) In the SCOM Console, go to Administrator > Notifications > Channels > New > Command.
b) Enter a name for the channel and click Next.
c) The following sample input is entered on the Settings tab:
Full path of the command line:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe

Command line parameters:

"C:\cem-scom\scom-cem.ps1" -AlertID "$Data[Default='NotPresent']/Context/DataItem/AlertId $" -CreatedByMonitor "$Data[Default='NotPresent']/Context/DataItem/CreatedByMonitor$" - ManagedEntitySource "$Data[Default='NotPresent']/Context/DataItem/ ManagedEntityDisplayName$" -WorkflowId "$Data[Default='NotPresent']/Context/DataItem/ WorkflowId$" -DataItemCreateTimeLocal "$Data[Default='NotPresent']/Context/DataItem/ DataItemCreateTimeLocal$" -ManagedEntityPath "$Data[Default='NotPresent']/Context/ DataItem/ManagedEntityPath$" -ManagedEntity "$Data[Default='NotPresent']/Context/ DataItem/ManagedEntity$" -MPElement "$MPElement$"

Startup folder for the command line:

C:\Windows\System32\WindowsPowerShell\v1.0\

d) Click Finish and Close.
10. Create a notification subscriber and subscription in Microsoft System Center Operations Manager for event management. In the SCOM Console, go to Administrator > Notifications > Subscribers > New. For new subscriptions click Administrator > Notifications > Subscription > New.
11. Save the integration in event management. To start receiving notifications from Microsoft System Center Operations Manager, ensure that Enable event management from this source is set to On.
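Before you rely on SCOM notifications, it can be useful to confirm that the SCOM server can reach the webhook at all. The following PowerShell sketch posts a trivial JSON body to a placeholder URL; event management may reject a body that lacks the mandatory event fields, so treat this purely as a connectivity check.

# Connectivity check only: substitute the webhook URL for your integration.
$Url = "https://<generated-webhook-url>"
$json = '{"test": "connectivity"}'
Invoke-RestMethod -Verbose -Method Post -ContentType "application/json" -Body $json -Uri $Url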

What to do next Because the SCOM notification does not track the script execution history, a log has been added to the PowerShell script to track whether the script has been called and executed successfully. Therefore, you must create a tmp directory and log file on the SCOM server for the script to write to. Complete the following steps to create them:
1. By default, the PowerShell script writes the log to C:\tmp\postResultCEM.txt.
2. Create the C:\tmp directory. If you want to set a different directory, change the following line in scom-cem.ps1:

## Temp directory to capture the raw event
$tmpdir = "C:\tmp"

3. Create the C:\tmp\postResultCEM.txt file. If you want to set a different file, change the following line in scom-cem.ps1:

$postFile = "$tmpdir\postResultCEM.txt"

4. Add read and execute permission for the directory and file that you created in steps 2 and 3.

5. If you want to disable logs, make the following changes in scom-cem.ps1:

## Comment out the two lines below:
#$postFile = "$tmpdir\postResultCEM.txt"
#Add-content $postFile -value $json

## Change the following line as below:
Invoke-RestMethod -Verbose -Method Post -ContentType "application/json" -Body $json -Uri $Url

Configuring Nagios XI as an event source Nagios XI provides network monitoring products. You can set up an integration with Netcool Operations Insight to receive alert information from Nagios XI products.

Before you begin The following event types are supported for this integration: • Service Notifications • Host Notifications

About this task Using a package of configuration files provided by event management, you set up an integration with Nagios XI. The alerts generated by Nagios XI are sent to the event management service as events. Note: Event management supports integration with the server monitoring and web monitoring components of the Nagios XI product.

Procedure
1. Ensure that the Nagios Plugins are installed into your instance of Nagios XI. Depending on how the plugins are controlled, you can check their status as follows:
• If you use xinetd for controlling the plugins: service xinetd status
• If you use a dedicated daemon for controlling the plugins: service nrpe status
2. Click Administration > Integrations with other systems.
3. Click New integration.
4. Go to the Nagios XI tile and click Configure.
5. Enter a name for the integration.
6. Click Download file to download the nagios-cem.zip file. The compressed file contains three files to set up the integration with event management:
• The file cem.cfg needs to be imported into Nagios XI.
• The file nagios-cem-webhook.sh includes the unique webhook URL generated for this integration.
• The file import-cem.sh copies the cem.cfg and nagios-cem-webhook.sh files to the Nagios XI destination directory.
Important: The download file contains credential information and should be stored in a secure location.
7. Click Save to save the integration in event management.
8. Extract the files to any directory, and copy the files to the Nagios XI server.
9. Run the import-cem.sh command to copy the cem.cfg and nagios-cem-webhook.sh files to the correct Nagios XI destination directory. For example, if you are logged in as a non-root user, run the command as follows to ensure it runs as root and copies the files as required: sudo bash ./import-cem.sh
10. Log in to the Nagios XI UI as an administrator, and use the Core Config Manager to import the cem.cfg file:

a) Go to Configure in the menu bar at the top of the window and select Core Config Manager from the list.
b) Select Tools > Import Config Files from the menu on the left side of the window.
c) Select cem.cfg and click Import.
11. Enable the environment variable macro:
a) In Core Config Manager, select CCM Admin > Core Configs from the menu on the left side of the window.
b) On the General tab enter 1 for the enable_environment_macros parameter.
c) Click Save Changes.
12. Ensure the cemwebhook contact is added to the set of hosts and services you monitor:
Note: Remember to enable the cemwebhook contact when setting up a source to monitor. To enable the cemwebhook contact for the host and all services for that host, ensure you select CEM Webhook-Contact under Send Alert notification To in Step 4 of the Configuration Wizard.
To check that cemwebhook is among the contacts included in alerts for a host:
a) In Core Config Manager, select Monitoring > Hosts from the menu on the left side of the window.
b) Click a host name to edit its settings.
c) Click the Alert Settings tab and then click Manage Contacts.
d) Ensure that cemwebhook is in the Assigned column. If not, then select it and click Add Selected.
e) Click Close and then Save.
Note: This example is for checking host settings, but the same steps can be followed to check services.
13. Change the command type for the notify-cem-host and notify-cem-service commands:
a) In Core Config Manager, select Commands > _Commands from the menu on the left side of the window.
b) Locate and click notify-cem-host to edit its settings.
c) Select misc command from the Command Type list.
d) Click Save.
e) Repeat for notify-cem-service.
14. Select Quick Tools > Apply Configuration from the menu on the left side of the window and click Apply Configuration.
15. To start receiving alert information from Nagios XI, ensure that Enable event management from this source is set to On.

Configuring New Relic as an event source New Relic monitors mobile and web applications in real-time, helping users diagnose and fix application performance problems. You can receive New Relic alerts through the incoming webhooks of Netcool Operations Insight.

Before you begin The following event types are supported for this integration: • APM • Servers • Plugins • Synthetics • Infrastructure • Browser

About this task You can configure an integration with either the New Relic Legacy or the New Relic Alerts system. Both configuration procedures are documented here. The first step is to generate the webhook URL within event management.

Procedure 1. Generate an incoming webhook for New Relic: a) Click Administration > Integrations with other systems. b) Click New integration. c) Depending on the version you use, go to the New Relic Legacy or New Relic Alerts tile, and click Configure.

d) Enter a name for the integration and click Copy to add the generated webhook URL to the clipboard. Ensure you save the generated webhook to make it available later in the configuration process. For example, you can save it to a file. e) Click Save. 2. Use the incoming webhook to: • “Configure New Relic Legacy” on page 230 as source. • “Configure New Relic Alerts” on page 229 as source.

Configure New Relic Alerts Configure integration with New Relic Alerts.

About this task Configure New Relic Alerts as source:

Procedure
1. Generate an incoming webhook as described in “1” on page 229.
2. Log in to New Relic at https://alerts.newrelic.com/ as an administrator.
3. From the New Relic menu bar, select Alerts > Notification channels.
4. Click New notification channel.
5. In the Channel details section, select Webhook for channel type.
6. Enter a name for the channel and paste the webhook URL into the Base URL field. This is the generated URL provided by event management.
7. Click Create channel.
8. Associate the webhook channel with all of the New Relic policies that you want to receive events from. For more information about associating channels with policies, see the New Relic documentation at https://docs.newrelic.com/docs/alerts/new-relic-alerts/managing-notification-channels/add-or-remove-policy-channels.
9. Ensure you set the incident preference to By condition and entity. This is required to send notifications to event management every time a policy violation occurs. Event management uses this information to accurately correlate events into incidents, and clear them when applicable.
a) From the New Relic menu bar, select Alerts > Alert policies.
b) Select your alert policy and click Incident preference.
c) Select By condition and entity, and click Save.
d) Repeat for each alert policy that sends notifications to event management.
For more information about incident preferences in New Relic, see https://docs.newrelic.com/docs/alerts/new-relic-alerts/configuring-alert-policies/specify-when-new-relic-creates-incidents.

10. To start receiving events from New Relic, ensure that Enable event management from this source is set to On.

Configure New Relic Legacy Configure integration with New Relic Legacy.

About this task Configure New Relic Legacy as source:

Procedure
1. Generate an incoming webhook as described in “1” on page 229.
2. Log in to New Relic at https://rpm.newrelic.com/ as an administrator.
3. From the New Relic menu bar, select Alerts > Channels and groups.
4. In the Channel details section, click Create channel > Webhook.
5. Enter a name for the channel and paste the incoming webhook URL into the Webhook URL field. This is the generated URL provided by event management. Add an optional description.
6. Select your Notification level.
7. Click Integrate with Webhooks.
8. Associate the webhook channel with all of the New Relic policies that you want to receive events from. For more information about associating channels with policies, see the New Relic documentation at https://docs.newrelic.com/docs/alerts/new-relic-alerts/managing-notification-channels/add-or-remove-policy-channels.
9. To start receiving events from New Relic, ensure that Enable event management from this source is set to On.

Configuring Pingdom as an event source Pingdom provides web performance and availability monitoring. You can set up an integration with Netcool Operations Insight to receive alert information from Pingdom.

About this task Using a webhook URL, you set up an integration with Pingdom, and associate the integration with the uptime and transaction checks. The alerts generated by the checks are sent to the event management service as events.

Procedure 1. Click Administration > Integrations with other systems. 2. Click New integration. 3. Go to the Pingdom tile and click Configure.

4. Enter a name for the integration and click Copy to add the generated webhook URL to the clipboard. Ensure you save the generated webhook to make it available later in the configuration process. For example, you can save it to a file. 5. Click Save. 6. Log in to your account at https://my.pingdom.com/. 7. Set up the integration: a) Select Integrations > Integrations. b) Click Add new in the upper-right corner of the window. c) Ensure Webhook is selected from the Type list. d) In the Name field, enter a name for the integration.

e) In the URL field, paste the webhook URL from event management.
f) Ensure the Active check box is selected.
g) Click Save integration.
Tip: For more information about setting up webhook integrations in Pingdom, see https://help.pingdom.com/hc/en-us/articles/207081599.
8. Enable the integration for the checks you want to receive alert information from:
a) Go to https://my.pingdom.com/dashboard.
b) Select Monitoring > Uptime.
c) Open a check, and select the check box next to your webhook integration. This enables the posting of alerts to the URL when, for example, a site goes down.
Tip: If you don't have checks set up, you can add them by clicking Add new in the upper-right corner of the window. For more information about checks in Pingdom and how to set them up, see https://help.pingdom.com/hc/en-us/articles/203749792-What-is-a-check-.
d) Repeat the steps for each check you want to receive alert information from.
9. To start receiving alert information from the Pingdom checks, ensure that Enable event management from this source is set to On.

Configuring Prometheus as an event source Prometheus is an open-source systems monitoring and alerting toolkit. You can set up an integration with Netcool Operations Insight to receive alert information from Prometheus.

About this task For information about configuring the Prometheus server parameters, see Configuring the Prometheus server in the IBM Cloud Private Knowledge Center.

Configuring Sensu as an event source You can set up an integration with Netcool Operations Insight to receive notifications created by Sensu. Sensu can monitor servers, services, application health, and business KPIs.

Before you begin The following event types are supported for this integration:
• Event management supports the default Sensu events. The use of mutators is not recommended.
• Sensu Core 2.0 Beta is not supported.
The following criteria apply to configuring Sensu as an event source:
• The CEM event handler plugin must be installed in the location from where it will be used to send events to event management.
• You must install Ruby before using the CEM event handler. When you install Sensu, an embedded Ruby is included; you can verify it with, for example: /opt/sensu/embedded/bin/ruby --version

Procedure 1. Click Administration > Integrations with other systems. 2. Click New integration. 3. Go to the Sensu tile and click Configure. 4. Enter a name for the integration. 5. Click Download file to download and decompress the cem-event-handler-plugin.zip file. Important: The download file contains credential information and should be stored in a secure location. 6. Copy cem.json to /etc/sensu/conf.d.

Chapter 5. Configuring 231 7. Copy cem.rb to /etc/sensu/plugins. 8. Run sudo chmod +x /etc/sensu/plugins/cem.rb. Example: copy cem-event-handler-plugin.zip to a directory and then run the following command at the directory to unzip all files and grant execution permission to cem.rb (for this example unzip must be installed and permission to run sudo is required):

unzip cem-event-handler-plugin.zip 'cem.rb' -d /etc/sensu/plugins; unzip cem-event-handler-plugin.zip 'cem.json' -d /etc/sensu/conf.d; sudo chmod +x /etc/sensu/plugins/cem.rb

9. Add the CEM event handler to each Sensu check definition to send events to event management. Example:

{ "checks": { "check_mem": { "command": "check-memory-percent.rb -w 50", "interval": 60, "subscribers" : [ "dev" ], "handler": "cem" } } }

10. Restart the Sensu services. 11. To start receiving alert notifications from Sensu, ensure that Enable event management from this source is set to On.

Configuring SolarWinds Orion as an event source The SolarWinds Orion platform provides network and system management products. You can set up an integration with Netcool Operations Insight to receive alert information from SolarWinds Orion.

Before you begin Event management supports integration with the Network Performance Monitor and Server and Application Monitor products of the SolarWinds Orion platform. Event management supports Out-Of-The-Box Alerts (OOTBA) for the following common objects in SolarWinds:
• Application
• Component
• Group
• Interface
• Node
• Volume
You can check the object type of each alert in Alert Manager by looking at the Property to Monitor column for an alert. If you enable an unsupported alert type, event information might still be sent to event management, but the event title will state "Unsupported SolarWinds object".

About this task Using an XML file, you set up an integration with SolarWinds Orion, and define trigger and reset actions for alerts. The alerts generated by SolarWinds Orion are sent to Netcool Operations Insight as events.

Procedure 1. Click Administration > Integrations with other systems. 2. Click New integration. 3. Go to the SolarWinds tile and click Configure.

4. Enter a name for the integration.
5. Click Download file to download the send-alert-cem.xml file. This file contains the settings required for the integration with Netcool Operations Insight, including the webhook URL.
Note:
• If you edit the integration later and click to download the file again, the current integration will no longer be valid. You will need to set up the integration again.
• The download file contains credential information and should be stored in a secure location.
6. Click Save to save the integration in Netcool Operations Insight.
7. Upload the XML file to the Alert Manager in SolarWinds Orion:
a) Log in to your SolarWinds Orion account as an administrator.
b) Go to ALERTS & ACTIVITY in the menu bar at the top of the window and select Alerts from the list.
c) Click Manage alerts.
d) Go to EXPORT/IMPORT in the menu bar at the top of the window and select Import Alert from the list.
e) Upload the send-alert-cem.xml file you downloaded earlier from event management.
Note: A new alert called Notify CEM - timestamp is created, together with the associated trigger and reset actions Post Problem Event to CEM - timestamp and Post Resolution Events to CEM - timestamp, where timestamp is in the UTC format. The Notify CEM alert contains settings for the integration between event management and SolarWinds. It is disabled by default and is not intended to be enabled.
8. Define trigger and reset actions for the alerts you want event management to receive event information from:
a) In Alert Manager, click the alert you want to edit, and go to the TRIGGER ACTIONS tab.
b) Click the Assign Action(s) button.
c) Select the Post Problem Event to CEM - timestamp check box and click ASSIGN.
d) Click Next to go to the RESET ACTION tab.
e) Click the Assign Action(s) button.
f) Select the Post Resolution Events to CEM - timestamp check box and click ASSIGN.
g) Click Next and then click Submit.

Attention: If you create more than one SolarWinds integration instance, ensure you select the right trigger and reset actions for each integration. For example, for your first integration select Post Problem Event to CEM - timestamp1 and Post Resolution Events to CEM - timestamp1, while for your second integration select Post Problem Event to CEM - timestamp2 and Post Resolution Events to CEM - timestamp2.

Tip: You can also define the trigger and reset actions for more than one alert at the same time. For the trigger action, select the check box for the alerts and select Assign Trigger Action from the ASSIGN ACTION list. Then select the Post Problem Event to CEM - timestamp check box and click ASSIGN. For the reset action, select the check box for the same alerts and select Assign Reset Action from the ASSIGN ACTION list. Then select the Post Resolution Events to CEM - timestamp check box and click ASSIGN.
9. To enable the alert, set Enabled (On/Off) to On in the appropriate rows for the alerts you want to receive event information from.
10. To start receiving alert information from the SolarWinds Orion triggers and reset actions, ensure that Enable event management from this source is set to On.

Chapter 5. Configuring 233 Configuring Splunk Enterprise as an event source Splunk Enterprise is an on-premises version of Splunk that you can use to monitor and analyze machine data from various sources. You can set up an integration with Netcool Operations Insight to receive alert information from Splunk Enterprise.

Before you begin The following event types are supported for this integration: • Splunk App for Infrastructure Monitoring – Monitoring for Linux/UNIX – Monitoring for Windows Note: You can use the Splunk App to define the mapping of Splunk fields with event management fields. Warning: Splunk Enterprise does not provide a means of downgrading to previous versions. If you want to revert to an older Splunk release, uninstall the upgraded version and reinstall the version you want. The Splunk App for UNIX/Linux is currently not supported beyond version 7.2.x.

About this task Using a package of installation and configuration files provided by Netcool Operations Insight, you set up an integration with Splunk Enterprise. The alerts generated by Splunk Enterprise are sent to the Netcool Operations Insight service as events.

Procedure
1. Click Administration > Integrations with other systems.
2. Click New integration.
3. Go to the Splunk Enterprise tile and click Configure.
4. Enter a name for the integration.
5. Click Download file to download and decompress the ibm-cem-splunk.zip file. The compressed file contains the savedsearches.conf file for both the UNIX and Windows systems, and the ibm-cem-alert.zip file which contains the file for installing the Splunk App for Netcool Operations Insight.
• splunk_app_for_nix/local/savedsearches.conf
• splunk_app_windows_infrastructure/local/savedsearches.conf
• ibm-cem-alert.zip
Important: The download file contains credential information and should be stored in a secure location.
6. Install the Splunk App using the ibm-cem-alert.zip file.
a) Log in to your Splunk Enterprise browser UI as an administrator.
b) Select App then click Manage Apps.
c) Click Install app from file.
d) Click Browse to locate the ibm-cem-alert.zip file.
e) Click Upload.
7. Log in to your Splunk Enterprise server host and copy the savedsearches.conf file to $SPLUNK_HOME/etc/apps/<app_directory>/local, where <app_directory> is splunk_app_for_nix or splunk_app_windows_infrastructure. UNIX:

sudo cp ibm-cem-splunk/splunk_app_for_nix/local/savedsearches.conf $SPLUNK_HOME/etc/apps/splunk_app_for_nix/local/savedsearches.conf

Windows:

copy ibm-cem-splunk\splunk_app_windows_infrastructure\local\savedsearches.conf %SPLUNK_HOME%\etc\apps\splunk_app_windows_infrastructure\local

Important: If you already have an existing Splunk app installed, then you already have settings defined in a savedsearches.conf file. Merge your existing savedsearches.conf file with the one downloaded from Netcool Operations Insight. You can merge the files manually, or use the Splunk Enterprise browser UI by clicking the Alerts tab at the top, expanding the selected alert section, clicking Edit > Edit Alerts, and editing the fields under section IBM Cloud Event Management Alert. You can use the savedsearches.conf file to check the mapping for the values of the fields. 8. Restart the Splunk Enterprise instance to ensure the new alerts are available. UNIX:

sudo $SPLUNK_HOME/bin/splunk restart

Windows:

%SPLUNK_HOME%\bin\splunk.exe restart

9. Log in to the Splunk Enterprise UI as an administrator and check that the alerts defined in savedsearches.conf are available:
For UNIX systems, go to Search & Reporting > Splunk App for Unix > Core Views > Alerts.
For Windows systems, go to Search & Reporting > Splunk App for Windows Infrastructure > Core Views > Alerts.
Note: If you modify the trigger conditions for the alerts, ensure that you do not set a trigger interval that is too frequent. For example, if you set Edit > Edit Alerts > Trigger Conditions to trigger an alert once every minute when the result count is greater than 0, the resulting number of events can overload event management. To limit the trigger frequency, set the greater than value to a number higher than 0, and set the alert to be triggered, for example, 5 times in every hour. You can also use the Throttle option to suspend the triggering of events for a set period after an event is triggered.
10. Optional: To receive resolution events from Splunk Enterprise, add the resolution:true value to the action.ibm_cem_alert.param.cem_custom parameter in the savedsearches.conf file, for example:

# Example
## Automation mapping for IO Utilization Exceeds Threshold Alert
## using IBM Event Management custom webhook alert
[IO_Utilization_Exceeds_Threshold]
action.ibm_cem_alert = 1
action.ibm_cem_alert.param.cem_custom = statusOrThreshold:$result.bandwidth_util$,resolution:true
action.ibm_cem_alert.param.cem_event_type = $name$
action.ibm_cem_alert.param.cem_resource_name = $result.host$
action.ibm_cem_alert.param.cem_resource_type = Server
action.ibm_cem_alert.param.cem_severity = Major
action.ibm_cem_alert.param.cem_summary = $result.host$: IO utilization exceeds $bandwidth_util$ threshold
action.ibm_cem_alert.param.cem_webhook = {{WEBHOOK_URL}}/{{WEBHOOK_USER}}/{{WEBHOOK_PASSWORD}}
disabled = 0

Tip: You can also add the resolution setting using the UI. Open Edit > Edit Alerts under the IBM Cloud Event Management Alert section, and add resolution:true to the Additional mapping (optional) field.
11. Click Save to save the integration in Netcool Operations Insight.
12. To start receiving alert notifications from Splunk Enterprise, ensure that Enable event management from this source is set to On.

Configuring Sumo Logic as an event source
You can set up an integration with Netcool Operations Insight to receive notifications created by Sumo Logic. Sumo Logic is a cloud log management and metrics monitoring solution.

Before you begin
Clear events are never sent from Sumo Logic. However, you can set the expiryTime attribute in the payload to automatically clear the resulting event management incidents after a specified time period (in seconds) has elapsed.
The following event types are supported for this integration:
• All Sumo Logic notifications via the webhook connection.

About this task Using a webhook URL, alerts generated by Sumo Logic are sent to the event management service as events.

Procedure 1. Click Administration > Integrations with other systems. 2. Click New integration. 3. Go to the Sumo Logic tile and click Configure.

4. Enter a name for the integration and click Copy to add the generated webhook URL to the clipboard. Ensure that you save the generated webhook URL to make it available later in the configuration process. For example, you can save it to a file.
5. Click Save.
6. Open the Sumo Logic app and go to Manage Data > Settings > Connections.
7. On Connections, click Add > Webhook.
8. In the Create Connection window, enter the connection Name and (optionally) a description.
9. In the field provided, paste the webhook URL that you copied in step 4.
10. Copy and paste the sample payload from this step into the Payload section. Note the following points:
• Attributes with curly brackets {{ }} are Sumo Logic payload variables that do not require updating.
• For attributes with angle brackets < >, you must provide a valid name or description, as appropriate.
• You can customize the payload if required. For more information about the available webhook payload variables, see the Sumo Logic user guide: https://help.sumologic.com/Manage/Connections-and-Integrations/Webhook-Connections/Set-Up-Webhook-Connections. If you are customizing the payload, you must include the four mandatory fields in your customized payload (see Table 39 on page 237 for mandatory fields).
Sample payload:

{ "resource": { "name":"", "type":"" }, "type": { "eventType":"", "statusOrThreshold":"{{AlertThreshold}}" }, "summary":"", "severity":"{{AlertStatus}}", "urls": [ { "url":"{{SearchQueryUrl}}", "description":"Search Query Url" } ],

236 IBM Netcool Operations Insight: Integration Guide "sender": { "name":"Sumo Logic" }, "expiryTime":300, "searchName":"{{SearchName}}", "searchDescription":"{{SearchDescription}}", "searchQuery":"{{SearchQuery}}", "numRawResults":"{{NumRawResults}}" }

The following table describes the attributes in the payload:

Table 39. Payload attributes

Attributes | Type | Description | Required
resource.name | String | The name of the resource that caused the event. | Mandatory
resource.type | String | The type of resource that caused the event. | Optional
type.eventType | String | Description of the type of event. | Mandatory
type.statusOrThreshold | String | The status or the threshold that caused the event. | Optional
summary | String | Description of the event condition. | Mandatory
severity | String | Severity of the event: Critical, Major, Minor, Warning, Information, or Indeterminate. | Mandatory
urls[0].url | String | The URL link to the search or metrics query. This attribute is mandatory if urls[0].description is defined. | Optional
urls[0].description | String | Descriptive text for the URL. | Optional
sender.name | String | Name of the sender that sent the event to event management. | Optional
expiryTime | Number | The number of seconds after which the event will be cleared, if there is no further occurrence. | Optional
searchName | String | Name of the saved search or monitor. | Optional
searchDescription | String | Description of the saved search or monitor. | Optional
searchQuery | String | The query used to run the saved search. | Optional
numRawResults | String | Number of results returned by the search. | Optional

11. Click Test Connection to ensure that the webhook connection with event management is configured correctly. Event management will not process the event if any attributes do not follow the correct JSON format and type.
12. Click Save.
13. To start receiving alert notifications from Sumo Logic, ensure that Enable event management from this source is set to On.
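If you want to verify the mapping before wiring up Sumo Logic, you can post a payload to the generated webhook URL yourself. The following is a minimal sketch, not part of the documented procedure; the webhook URL is a placeholder for the value copied in step 4, and the literal values stand in for the Sumo Logic payload variables:

# Send a test event containing the four mandatory fields to the generated webhook URL
curl -X POST "https://<generated-webhook-url>" \
  -H "Content-Type: application/json" \
  -d '{
    "resource": { "name": "abc.nyc", "type": "Server" },
    "type": { "eventType": "Utilization", "statusOrThreshold": "Critical" },
    "summary": "Test event from the Sumo Logic integration",
    "severity": "Critical",
    "sender": { "name": "Sumo Logic" },
    "expiryTime": 300
  }'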

Configuring VMware vCenter Server as an event source
VMware vSphere is a centralized management application that lets you manage virtual machines and ESXi hosts. You can set up an integration with Netcool Operations Insight to receive notifications created by VMware vSphere.

Before you begin
• You must have permission to run Python 3 on vCenter Server Appliance (VCSA) or PowerShell on Windows.
• There is no mechanism in VMware to apply the script in this procedure to multiple alarm definitions at a time. The path to the script must be entered manually in each alarm.
The following event types are supported for this integration:
• All VMware alarms via the webhook connection.

Procedure
1. Click Administration > Integrations with other systems.
2. Click New integration.
3. Go to the VMware vSphere tile and click Configure.
4. Enter a name for the integration.
5. Click Download file to download and decompress the vmware-alarm-action-scripts-for-cem.zip file.
6. Create the following directory:
• On Linux/Photon OS: /root/cem
• On Windows: C:\cem
7. Transfer and unzip the package to /root/cem or C:\cem.
Note: To transfer the package to vCenter Server installed on Linux/Photon OS, you must first run the chsh command to set bash as the login shell before transferring the package. Example:

root@9 [ ~ ]# chsh
Changing the login shell for root
Enter the new value, or press ENTER for the default
Login Shell [/bin/appliancesh]: /usr/bin/bash

8. For information about creating an alarm in the vSphere client, see https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.monitoring.doc/GUID-E30ED662-D851-4230-9AFE-1BBBC55C98D6.html.
9. For information about running a script or a command as an alarm action, see https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.monitoring.doc/GUID-AB74502C-5F01-478D-AF66-672AB5B8065C.html.

Note: You must identify the path used to run Python 3 or cmd.exe on vCenter Server. If vCenter Server is running on Linux/Photon OS, you can run 'which python' to get the path to Python 3, for example, /usr/bin/python. Run /usr/bin/python --version to verify that it is Python 3. If vCenter Server is running on Windows, you must identify the path to cmd.exe, for example, C:\Windows\System32\cmd.exe.
10. To send an alarm to event management, you must specify the following as an alarm action:
• Linux/Photon OS: /usr/bin/python /root/cem/vmware-alarm-action-scripts-for-cem/sendEventToCEM.py
• Windows: C:\Windows\System32\cmd.exe "/c echo.|powershell -NonInteractive -File c:\cem\vmware-alarm-action-scripts-for-cem\sendEventToCEM.ps1"
Note: You can specify /usr/bin/python /root/cem/vmware-alarm-action-scripts-for-cem/sendEventToCEM.py --expirytime <seconds> to automatically clear the resulting event after the specified time period.

What to do next
The same script can be used to clear the alarm. In VMware, go to Edit Alarm Definition > Reset Rule > Run script and specify the path from step 10 in the Run this field.
Note: Due to VMware limitations, this script does not always get executed.

Creating custom event sources with JSON
You can insert event information into Netcool Operations Insight from any event source that can send the information in JSON format.

About this task Using a webhook URL, set your event source to send event information to Netcool Operations Insight. Using an example incoming request in JSON format, define the mapping between the event attributes from your source and the event attributes in Netcool Operations Insight.

Procedure 1. Click Administration > Integrations with other systems. 2. Click New integration. 3. Go to the Webhook tile and click Configure.

4. Enter a name for the integration and click Copy to add the generated webhook URL to the clipboard. Ensure that you save the generated webhook URL to make it available later in the configuration process. For example, you can save it to a file.

Tip: Enter a name that identifies the event source you want to receive event information from. A descriptive name will help you identify the event source integration later.
5. Go to your event source and use the generated webhook URL to configure the event source to send event information to Netcool Operations Insight.
Note: When Netcool Operations Insight is deployed in a Red Hat OpenShift environment, the hostname used in the webhook address (which might be an internal Red Hat OpenShift alias) must be resolvable in DNS or in the local hosts file on the system from which the JSON alerts are sent.
6. Copy an incoming JSON request from the event source you are integrating with, and paste it in the Example incoming request field of your event source integration in the Netcool Operations Insight UI.
7. To populate the correct normalized event fields from the incoming request, define the mapping between the JSON request attributes and the normalized event attributes.
Note: Four attributes are mandatory, as listed in this step. You can also set additional attributes to be mapped, as described in the following step.
In the Netcool Operations Insight UI, go to your event source integration and enter values for the event attributes in the Event attributes section. Based on this mapping, the Event Management API then takes values from the incoming request to populate the event information that is inserted into Netcool Operations Insight. For more information about the Event Management API, see Developer Tools at https://console.cloud.ibm.com/apidocs/.
The following attributes must have a value for an event to be processed by Netcool Operations Insight. Set the mapping in Event attributes > Mandatory event attributes:
• Severity: The event severity level, which indicates how the perceived capability of the managed object has been affected. Values can be one of the following severity levels: "Critical", "Major", "Minor", "Warning", "Information", "Indeterminate", 60, 50, 40, 30, 20, 10 (60 is the highest and 10 is the lowest severity level).
• Summary: String that contains text to describe the event condition.
• Resource name: String that identifies the primary resource affected by the event.
• Event type: String to help classify the type of event, for example, Utilization, System status, Threshold breach, and other type descriptions.
See later for mapping examples.
Note: The event attributes are validated against the mapping to the incoming request example. If the validation is successful, the output is displayed in the Result field.
Important: Ensure you are familiar with the JSON format, see https://www.json.org/. For more complex mappings, use JSONATA functions, see http://docs.jsonata.org/object-functions.html.
8. Optional: In addition to the mandatory attributes, you can set other event attributes to be used, and define the mappings for them. Click Event attributes > Optional event attributes, select the additional attributes, and click Confirm selections. Then define the mapping between the additional normalized event attributes and the JSON request attributes to have the correct values populated for the events in Netcool Operations Insight. See later for mapping examples.
Note: Most optional attributes can only be added once. Other attributes, such as URLs and Related resources, can be added more than once.
To remove optional attributes, clear the check box for the attribute, or click delete if it has more than one attribute set (for example, URLs), and click Confirm selections. 9. Click Save to save the event source integration.
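If you want to verify the integration end to end, one option (a sketch, not part of the documented procedure; the webhook URL is a placeholder for the one generated in step 4) is to post the example incoming request used in Table 40 with curl and check that the mapped event appears:

# Simulate the event source by sending the example incoming request to the webhook
curl -X POST "https://<generated-webhook-url>" \
  -H "Content-Type: application/json" \
  -d '{
    "Status": "Yellow",
    "Problem": "High CPU usage",
    "Geo@Location": "New York",
    "Host": "abc.nyc",
    "Type": "Database",
    "SubType": "DB2",
    "Name": "Server A",
    "ProblemType": "Utilization"
  }'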

Mapping JSON attributes to normalized attributes
Ensure you are familiar with the JSON format, see https://www.json.org/.

For more complex mappings, use JSONATA functions, see http://docs.jsonata.org/object-functions.html. The following example demonstrates the mapping of mandatory event attributes from a JSON request to Netcool Operations Insight:

Table 40. Mapping example for mandatory attributes

Example incoming request:

{
  "Status": "Yellow",
  "Problem": "High CPU usage",
  "Geo@Location": "New York",
  "Host": "abc.nyc",
  "Type": "Database",
  "SubType": "DB2",
  "Name": "Server A",
  "ProblemType": "Utilization"
}

Normalized event attribute | Mapping value | Result in event information
Severity | Status = "Red" ? "Critical" : Status = "Yellow" ? "Major" : Status = "Green" ? "Minor" | Major
Summary | Problem & " in " & `Geo@Location` | High CPU usage in New York
Resource name | Name | Server A
Event type | ProblemType | Utilization

Tip: Back ticks (``) are required in the mapping to interpret attributes with the at sign (@) in the incoming request.

The following example demonstrates the mapping of the optional Details and Timestamp event attributes from a JSON request to Netcool Operations Insight:
Note: You can select additional attribute fields by clicking Event attributes > Optional event attributes. In this case, select the check boxes for Details and Timestamp, and click Confirm selections.

Table 41. Mapping example for optional attributes

Example incoming request:

{
  "Status": "Yellow",
  "Problem": "High CPU usage",
  "GeoLocation": "New York",
  "Host": "abc.nyc",
  "Type": "Database",
  "SubType": "DB2",
  "Name": "Server A",
  "ProblemType": "Utilization",
  "trigger": {
    "actual_value": 96,
    "metric_type": "[monitor, cpu, percent-idle]",
    "threshold": 50,
    "threshold_statistic": "avg",
    "type": "threshold"
  }
}

Normalized event attribute | Mapping value | Result in event information
Details | {"value": trigger.actual_value} | 96
Timestamp | $now() | 2018-02-28T16:35:22.797Z

Important: The Timestamp value can be an integer set in milliseconds since the Unix Epoch (January 1970), or an ISO-8601 format string such as 1991-09-06 14:56:20.
Note: If your event does not contain time information, you can use $now() to set a time stamp.

Some optional attributes can have more than one instance, and the number of instances can vary from one event to another. For example, you can add more than one instance of the URL field, and provide a mapping expression for each of them. If your mapping expression produces an array of values (for example, selecting fields from within a list in the incoming request), the corresponding number of instances of the event attribute will automatically be created. The following example demonstrates the mapping of the optional URL event attribute from a JSON request to Netcool Operations Insight:
Note: You can select additional attribute fields by clicking Event attributes > Optional event attributes. In this case, click add for URL, and click Confirm selections.

Table 42. Mapping example for optional array type attribute, using the URL attribute

Example incoming request:

{
  "Status": "Yellow",
  "Problem": "High CPU usage",
  "GeoLocation": "New York",
  "Host": "abc.nyc",
  "Type": "Database",
  "SubType": "DB2",
  "Name": "Server A",
  "ProblemType": "Utilization",
  "trigger": {
    "actual_value": 96,
    "metric_type": "[monitor, cpu, percent-idle]",
    "threshold": 50,
    "threshold_statistic": "avg",
    "type": "threshold"
  },
  "urls": [
    {"url": "http://abcmonitoring.com"},
    {"url": "http://xyzmonitoring.com"}
  ]
}

Normalized event attribute | Mapping value | Result in event information
URL 1 > URL | urls[0].url | http://abcmonitoring.com
URL 1 > Description | "Launch to ABC Monitoring" | Launch to ABC Monitoring
URL 2 > URL | urls[1].url | http://xyzmonitoring.com
URL 2 > Description | "Launch to XYZ Monitoring" | Launch to XYZ Monitoring

The following example is also for an array, but using the optional Details attribute, and shows how to change the mapping from the first list within the array to the second list within the array.

Table 43. Mapping example for optional array type attribute, using the Details attribute

Example incoming request (an alarm_detail array that contains two lists of process metrics):

{
  "alarm_detail": [
    {
      "apm@proc_user_cpu_norm": "-0.02",
      "apm@vm_exe_size_mb": "0",
      "apm@text_resident_size": "1",
      "apm@proc_user_cpu_norm_enum": "Not_Collected",
      "apm@total_busy_cpu_pct": "0",
      "apm@total_cpu_time": "000d 00h 02m 57s",
      "apm@proc_cpu": "2",
      "apm@vm_lib_size": "22116",
      "apm@total_size_memory": "994599",
      "apm@vm_stack": "88",
      "apm@tot_minor_faults": "199",
      "apm@vm_lock_mb": "0",
      "apm@vm_data_mb": "3627.8",
      "apm@proc_system_cpu_norm_enum": "Not_Collected",
      "apm@busy_cpu_pct": "104.05",
      "apm@priority": "20",
      "apm@user_sys_cpu_pct": "0",
      "apm@session_id": "7331",
      "apm@cpu_seconds": "177",
      "apm@vm_size": "3978396",
      "apm@threads": "430",
      "apm@process_filter": " ",
      "apm@resident_set_size": "140867",
      "apm@process_id": "28967",
      "apm@vm_size_mb": "3885.1",
      "apm@proc_busy_cpu_norm_enum": "Not_Collected",
      "apm@time": "00002:57",
      "apm@state": "0",
      "apm@vm_data": "3714920",
      "apm@shared_lib_set_size": "0",
      "apm@data_set_size": "928752",
      "apm@vm_lock": "0",
      "apm@total_cpu_percent": "4.27",
      "apm@state_enum": "Sleeping",
      "apm@process_command_name": "java",
      "apm@vm_stack_mb": "0",
      "apm@system_cpu_time": "000d 00h 00m 07s",
      "apm@proc_busy_cpu_norm": "-0.02",
      "apm@vm_lib_size_mb": "21.5",
      "apm@timestamp": "1150727144822000",
      "apm@vm_exe_size": "4",
      "apm@dirty_pages": "0",
      "apm@tot_proc_user_cpu": "0",
      "apm@user_cpu_time": "000d 00h 02m 50s",
      "apm@tot_proc_system_cpu": "0",
      "apm@shared_memory": "18042",
      "apm@system_name": "nc9042036139:LZ",
      "apm@proc_user_cpu": "99.45",
      "apm@proc_system_cpu_norm": "-0.02",
      "apm@process_count": "1",
      "apm@parent_process_id": "1",
      "apm@nice": "0",
      "apm@process_group_leader_id": "28856",
      "apm@proc_system_cpu": "4.60",
      "apm@tot_major_faults": "0"
    },
    {
      "vm_stack": "88",
      "parent_process_id": "1",
      "proc_system_cpu": "4.60",
      "process_group_leader_id": "28856",
      "tot_minor_faults": "199",
      "vm_lock_mb": "0",
      "vm_data_mb": "3627.8",
      "proc_user_cpu_norm_enum": "Not_Collected",
      "vm_lib_size": "22116",
      "busy_cpu_pct": "104.05",
      "priority": "20",
      "total_size_memory": "994599",
      "session_id": "7331",
      "user_sys_cpu_pct": "0",
      "proc_system_cpu_norm_enum": "Not_Collected",
      "cpu_seconds": "177",
      "process_filter": " ",
      "vm_size": "3978396",
      "threads": "430",
      "process_id": "28967",
      "vm_size_mb": "3885.1",
      "time": "00002:57",
      "resident_set_size": "140867",
      "process_command_name": "java",
      "state_enum": "Sleeping",
      "proc_busy_cpu_norm_enum": "Not_Collected",
      "proc_busy_cpu_norm": "-0.02",
      "state": "0",
      "vm_lib_size_mb": "21.5",
      "vm_data": "3714920",
      "shared_lib_set_size": "0",
      "total_cpu_percent": "4.27",
      "vm_lock": "0",
      "data_set_size": "928752",
      "system_cpu_time": "000d 00h 00m 07s",
      "vm_stack_mb": "0",
      "timestamp": "1150727144822000",
      "vm_exe_size": "4",
      "dirty_pages": "0",
      "tot_proc_user_cpu": "0",
      "user_cpu_time": "000d 00h 02m 50s",
      "proc_system_cpu_norm": "-0.02",
      "tot_proc_system_cpu": "0",
      "shared_memory": "18042",
      "system_name": "nc9042036139:LZ",
      "tot_major_faults": "0",
      "proc_user_cpu": "99.45",
      "nice": "0",
      "proc_user_cpu_norm": "-0.02",
      "text_resident_size": "1",
      "vm_exe_size_mb": "0",
      "proc_cpu": "2",
      "total_cpu_time": "000d 00h 02m 57s",
      "total_busy_cpu_pct": "0",
      "process_count": "1"
    }
  ]
}

Normalized event attribute | Mapping value | Result in event information
Details | alarm_detail[0] | The first list in the alarm_detail array (the attributes prefixed with apm@)
Details | alarm_detail[1] | The second list in the alarm_detail array

Configuring Zabbix as an event source
You can set up an integration with Netcool Operations Insight to receive notifications created by Zabbix. Zabbix is an open source monitoring solution for network monitoring and application monitoring.

Before you begin
Supported Zabbix versions are:
• 3.0 LTS
• 3.4
• 4.0 LTS
The following event types are supported for this integration:
• Host monitoring
• Service monitoring
• Web monitoring

About this task
Download the integration package from event management and import the scripts into your Zabbix server. Sample commands are provided for Linux operating systems. Copy the zabbix-notification.sh notification script into the AlertScriptsPath directory of your Zabbix server. Execute the create-zabbix-action.sh script for an out-of-the-box event management integration with Zabbix.

Procedure
1. Click Administration > Integrations with other systems.
2. Click New integration.
3. Go to the Zabbix tile and click Configure.
4. Enter a name for the integration.
5. Click Download file and decompress the zabbix-cem.zip file on the Zabbix server.
Important: The download file contains credential information and should be stored in a secure location.
6. Copy zabbix-notification.sh into the AlertScriptsPath directory. The AlertScriptsPath directory is specified within the Zabbix server configuration file, zabbix_server.conf. Use the following command to check the AlertScriptsPath directory:

$ grep AlertScriptsPath /etc/zabbix/zabbix_server.conf

Example return:

"AlertScriptsPath=/usr/lib/zabbix/alertscripts"

Use the following command to copy zabbix-notification.sh to the AlertScriptsPath directory (using the directory returned by the previous command):

$ cp zabbix-notification.sh /usr/lib/zabbix/alertscripts/

7. Execute the create-zabbix-action.sh script to create the required configuration. The script creates the configuration by calling the Zabbix REST API. A new media type, user, and trigger action are created so that Zabbix notifications can be forwarded to event management. Ensure that the file is executable.

Execute the script with --user <username> --password <password> --ip <address>. If the username and password are not provided, the script uses Admin/zabbix by default:

$ ./create-zabbix-action.sh --user Admin --password zabbix --ip 10.0.0.1

Note: The machine where you execute the script (for example, the Zabbix server) must be able to connect to the Zabbix API server. Provide the FQDN or IP address of the Zabbix web front-end (API) server with --ip (mandatory). To do this, make the following edit in the zabbix-notification.sh file, changing:

# Define your Zabbix front-end web server IP/FQDN below
# else default use zabbix server hostname
host="myserver.office.com"
#host=$(echo `hostname -f`)
if [ "${host}" == "" ]
then
host=$(echo ${HOSTNAME})
fi

TO:

# Define your Zabbix front-end web server IP/FQDN below
# else default use zabbix server hostname
host="<IP>"
#host=$(echo `hostname -f`)
if [ "${host}" == "" ]
then
host=$(echo ${HOSTNAME})
fi

8. Save the integration in event management. To start receiving notifications from Zabbix, ensure that Enable event management from this source is set to On.

What to do next
With this integration, the user admin-cem is created under the user group Zabbix Administrators. The host that you want to monitor must be assigned to the user group Zabbix Administrators or to the user admin-cem before notifications can be sent to event management. Define the host and user/user group permissions in the Zabbix permissions tab:
• Administration > User groups > Zabbix Administrators > Permissions
• Administration > Users > admin-cem > Permissions

Connections
For semi-automated and fully-automated runbooks, which use automations of type script or type BigFix, a connection must be set up to connect to your target endpoints. To trigger runbook executions from incoming events, set up a trigger connection.
You can add, edit, or delete the connection for each automation provider or the event trigger provider. You can perform the following actions on each connection tile in the Connections page:
Configure
If you are setting up your environment for the first time, or you want to use another automation provider, you can create a new connection. For more information about how to create a connection, see "Create a connection" on page 246.
Edit
If your settings changed, for example your username or password, you can edit your connection. Select Edit to open the configuration dialog and apply your changes. You can edit a connection if you are an RBA Author or RBA Approver.
Check connection status
Navigate to the Connections page to check if the connection is working or if it has failed.
Delete
If you no longer need a connection, click Delete to delete the connection. You can delete a connection if you have the noi_lead role.

For more information about roles, see "Administering users" on page 353.

Create a connection
IBM Runbook Automation connects to your target endpoints, for example your on-premises back-end system. If you use automations of type script or BigFix, you must create a Script/BigFix connection to access your target endpoints. If you want to use a trigger, connect to Netcool/Impact, also referred to as the event trigger provider.
You can configure a connection on the Connections page. To create a new connection, click Configure on the corresponding connection tile. To edit an existing connection, click Edit on the corresponding connection tile. You can configure one connection of each type. For example, you can configure one Script automation provider connection. Follow the on-screen instructions in the New connection window.
You can create a new connection with the noi_lead role. For more information about roles, see "Administering users" on page 353.

Event Trigger (Netcool/Impact)
Create an Event Trigger connection to map runbooks with Netcool/Impact events.
Cloud Event Management does not support the Event Trigger integration described in this section. Refer to the Cloud Event Management documentation for integration options.
You can configure a connection on the Connections page. Click Configure on the Event Trigger tile to open the configuration window. You can configure one Event Trigger connection. Follow the on-screen instructions in the New Connection window.
1. Install IBM Netcool/Impact fix pack V7.1.0.18 or higher
Check that the fix pack level of your current Netcool/Impact installation is V7.1.0.18 or higher. A Netcool/Impact license is required.
2. Configure IBM Netcool/Impact
For more information about how to configure IBM Netcool/Impact, see "Installing Netcool/Impact to run the trigger service" on page 249.
3. Enter IBM Netcool/Impact access information
Enter the URL and credentials for the IBM Netcool/Impact connection. You can specify the certificate of the IBM Netcool/Impact server for additional security. For more details on the connection dialog and the connection parameters, see "Configure a connection to the Netcool/Impact server" on page 259.

IBM BigFix automation provider
The IBM BigFix automation provider connects to your back-end system through your local IBM BigFix setup. You can configure only one IBM BigFix connection. Follow the step-by-step instructions in the New connection window.
1. You can configure a connection on the Connections page. Click Configure on the BigFix tile to open the configuration window.
2. Enter the IBM BigFix access information.
It is a requirement that you already have IBM BigFix and Web Reports installed. Check with your system administrator to obtain access information. Enter the URL in the following format:

https://<bigfix_server>:52311/api

You can specify the certificate of the IBM BigFix server for additional security. You must enter the certificate in PEM format. If the certificate is a self-signed certificate, then just enter the certificate itself. If the IBM BigFix server certificate is signed by a certificate authority (CA), then enter the public certificate of the CA. In any case, the CN field of the IBM BigFix server certificate must include the correct FQDN of the system where IBM BigFix is installed.
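If you need to obtain the server certificate in PEM format, one way (a sketch, not part of the documented procedure; the hostname is a placeholder) is to pull it from the BigFix server with openssl, mirroring the certificate retrieval used elsewhere in this chapter:

# Retrieve the BigFix server certificate in PEM format for pasting into the connection dialog
echo | openssl s_client -connect <bigfix_server>:52311 -servername <bigfix_server> 2>/dev/null \
  | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > bigfix.pem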

SSH script automation provider
Use a script automation provider to connect to your back-end system (targets). The SSH provider is agentless and connects directly to the target machine. It authenticates using public key-based authentication in SSH. The script automation provider works for back-end systems (targets) running UNIX or Windows. For UNIX, the user executing the automation must have sufficient rights to execute these features:
• bash – a shell that is used to wrap and execute the specified commands or script.
• mktemp – a utility that is used to create a temporary file, which is required for the script execution with this automation provider to work.
• openssl – a utility that is used on the target system to decrypt the transferred commands or script.

Defining which RBA user is allowed to run an automation
The current public key must be added to all target machines that you plan to execute scripts on via the SSH provider. Make sure that you add the public key in the correct repository, so that the script can be executed:
• By the root user.
• By specific users on this target. For example, by putting the key in the authorized_keys file in the home directory of these specific users.
Depending on the public key used, any RBA user or only members of specific RBA groups will be able to access the given target system. See step 3 in the procedure below for more information about creating public keys for specific groups.

Defining which UNIX or Windows user is used to run an automation
By default, scripts are executed on the target machine using the root username. It is possible to run the script as a different UNIX or Windows user. The username can either be fixed or depend on the RBA user that is currently logged in. For more information, see "Creating Script automations" on page 392.

Defining an SSH jumpserver
An optional SSH jumpserver can be specified. If chosen, any connections to target systems will be routed through this jumpserver. See step 1 in the procedure below for more information about using a jumpserver.
Note: The jumpserver must be a UNIX system (including Linux and AIX). The jumpserver cannot be a Windows system.

About this task You can configure a connection on the Connections page. Click Configure on the Script tile to open the configuration window and follow the on-screen instructions.

Procedure 1. If you are using a jumpserver you must configure it. Depending on your environment, you might require a jumpserver to access your target endpoints. A jumpserver is an SSH endpoint that is used to connect to the nested SSH endpoints. This is a common approach used to communicate between different network zones. To use a jumpserver with RBA it must have an SSH server running and the nc command must be available. This is used to connect to nested SSH target endpoints. Click Use a jumpserver and specify the following jumpserver credentials: Jumpserver address The hostname or IP address of the jumpserver. Jumpserver port The SSH port of the jumpserver.

Jumpserver username
The username for authentication on the jumpserver.
Jumpserver password
The password for authentication on the jumpserver.
Any connections to SSH target endpoints will use the specified jumpserver. If you are using the secure gateway client when a jumpserver is specified, the connection between the secure gateway client and the target endpoint will use the jumpserver.
2. On your target machine, register the default public key to enable access to the target endpoints via SSH for all users (see the sketch after this procedure).
Configuring SSH public key authentication for the UNIX root user
The displayed public key must be added to all target machines that you plan to execute scripts on via the SSH provider. This key enables any RBA user to run script automations on the given target endpoint. The key must be added to the authorized_keys file, which is usually found at /root/.ssh/authorized_keys.
Configuring SSH public key authentication for a specific UNIX user
If you want to enforce that only a specific UNIX user can execute the script on this target endpoint, copy the key to the authorized_keys file in the home directory of that user, for example /home/john/.ssh/authorized_keys.
You can choose to regenerate the public key by clicking the refresh button in the upper right corner of the public key.
Note: Regenerating the public key will delete the old key pair. If you choose to regenerate the key pair, you must exchange the public key on each target machine that you plan to access via the SSH provider. For more information about how to configure which UNIX user is used to run the script, see "Creating Script automations" on page 392.
3. Optionally, you can generate group-specific keys. Use these if you only want users from a specific group to have access to a machine. In this scenario, the default public key can act as a fallback in the event that none of the other keys work.
a. Click New public key for groups.
b. Select a group, then use the refresh button to create a public key for the selected group.
c. The table lists all existing group-specific keys. Use the action buttons on the right to change, delete, or copy the public keys.
Note: Runbook Automation will try every eligible public key for a given RBA user to access a target endpoint until it finds an authorized public key. Some target endpoints might have security policies in place that ban further connection attempts after a certain number of unauthorized connections. Therefore, it is good practice to either avoid having too many group-specific public keys or avoid having RBA users in too many different groups.
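The following is a minimal sketch of registering the public key on a UNIX target for step 2; the key value is a placeholder for the public key displayed on the Connections page:

# Append the RBA public key to the root user's authorized_keys file on the target,
# creating the .ssh directory with the permissions sshd expects
mkdir -p /root/.ssh && chmod 700 /root/.ssh
echo 'ssh-rsa AAAA...<public-key-from-Connections-page>...' >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys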

Ansible Tower automation provider

About this task Complete the following steps to create a connection to an Ansible Tower Server.

Procedure 1. Open the Connections tab. 2. Click Configure on the Ansible Tower tile.

3. Enter the base URL of your Ansible Tower server. This URL must contain the protocol, for example: https://ansible-tower-server.mycompany.com:443.
4. Choose an authentication type. You can select Basic Auth to connect with a user name and password, or API Token to use a bearer token previously created with Write Scope in Ansible Tower.
5. Enter the chosen authentication information.
6. Optional: Enter the Ansible Tower server certificate or certificate chain.
7. Click Save to store the connection information.
Note: When using the standard Ansible Tower installation, a self-signed certificate issued for CN localhost might be generated. Make sure to replace that certificate with a certificate issued for the actual domain name you will be using. Otherwise, the connection might not work.
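As an optional pre-check (a sketch, not part of the product procedure; the URL and token are placeholders), you can verify that the base URL and API token are valid by querying the Ansible Tower REST API directly:

# A 200 response returning your user record indicates that the URL and token are usable
curl -sk -H "Authorization: Bearer <api-token>" \
  "https://ansible-tower-server.mycompany.com:443/api/v2/me/"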

Enabling runbook automation for your IBM Netcool Operations Insight on Red Hat OpenShift deployment
Enable the launch-in-context menu to start manual or semi-automated runbooks from events for your cloud IBM Netcool Operations Insight on Red Hat OpenShift deployment.

About this task To add the runbook automation entry to the launch-in-context menu for a hybrid deployment, see “Installing Netcool/Impact to run the trigger service” on page 249. Complete the following steps to add the runbook automation entry to the launch-in-context menu of the Netcool/OMNIbus event list for a cloud deployment.

Procedure
1. Log on to Netcool/OMNIbus Web GUI as an administrator.
2. Select Administration > Event Management Tools > Menu Configuration.
3. Select Alerts > Modify. Netcool/OMNIbus Web GUI displays the Menus Editor dialog box.
4. From the available items, select LaunchRunbook.
5. Select the arrow to add LaunchRunbook to the Current Items list. You can optionally rename the menu entry, for example, to add a space between "Launch" and "Runbook".
6. Click Save.

What to do next Install Netcool/Impact and complete post-installation tasks. For more information, see “Installing Netcool/Impact to run the trigger service” on page 249 and “Postinstallation of Netcool/Impact V7.1.0.18 or higher” on page 250. Create triggers and link events with the runbooks. For more information, see “Triggers” on page 400.

Installing Netcool/Impact to run the trigger service
Triggers map events to runbooks. Using triggers, an operations analyst can immediately execute the runbook that matches an event. Netcool/Impact is used to run the trigger service.
Attention: The instructions to configure Netcool/Impact only apply to a hybrid deployment of Netcool Operations Insight where Netcool/Impact runs on premises. Netcool/Impact is automatically configured in a full deployment of Netcool Operations Insight on Red Hat OpenShift. For more information, see "Installing on Red Hat OpenShift only" on page 100.
A trigger specifies which event maps to which runbooks. All information about a trigger, including the event filters, is stored in Netcool/Impact. Netcool/Impact monitors the incoming events from Netcool/OMNIbus. If an event is found in Netcool/OMNIbus that matches a trigger stored in Netcool/Impact, the following actions are run:

Manual or semi-automated runbooks
Netcool/Impact stores the ID of the runbook and the matching parameters in the OMNIbus alerts.status table. The operational analyst can then use the right-click context menu to launch the matching runbook directly from the event.
Fully automated runbooks
Netcool/Impact runs the runbook via an API call to IBM Runbook Automation. The result is stored in the event journal.

Postinstallation of Netcool/Impact V7.1.0.18 or higher
Configure IBM Tivoli Netcool/Impact V7.1.0.18 or higher to integrate IBM Runbook Automation and map runbooks to events.
Run the following steps on the Impact server. If any secondary Impact servers are installed, ensure that these servers are stopped. The immediate steps below must be completed on the primary Netcool/Impact server. For secondary Netcool/Impact servers, see "Configuring a secondary ObjectServer" on page 251 and "Configuring a secondary Netcool/Impact server" on page 252.
Set up IBM Tivoli Netcool/Impact V7.1.0.18 or higher:
Import the database schema and Runbook Automation project
Run the following command under the same user account that you used to install Netcool/Impact.
On Linux systems:

$IMPACT_HOME/install/cfg_addons/rba/install_rba.sh

On Windows systems:

<IMPACT_HOME>/install/cfg_addons/rba/install_rba.bat

Update the ObjectServer
IBM Runbook Automation requires additional fields in the alerts.status events table. If you have a high availability ObjectServer setup, you must run the command for both the primary and backup server.
Copy the following file to the ObjectServer:

$IMPACT_HOME/add-ons/rba/db/rba_objectserver_fields_update.sql

Then run the following command:

$OMNIHOME/bin/nco_sql -username <username> -password <password> < rba_objectserver_fields_update.sql

<username> is a placeholder for the ObjectServer user name and <password> for the ObjectServer password.
Note: The < operator is used to pipe the file to the ObjectServer to create the new fields.
Configure and test the RBA_ObjectServer data source connection
1. Log in to Netcool/Impact.
2. Switch to the RunbookAutomation project from the drop-down menu in the top right corner.
3. Click the Data Model tab.
4. Right-click the RBA_ObjectServer datasource and click Edit.
5. Enter the access information for your Netcool/OMNIbus ObjectServer.
6. Test the connection.
7. Save the changes and close the editor.
8. Expand the RBA_ObjectServer data source in the tree view.
9. Right-click the RBA_Status data type and click Edit.
10. In the Table Description area, click Refresh Fields for the alerts.status base table.

11. Ensure that the table row for the Serial item has a value of Yes in the Key Field column. If the value displayed is No, double-click this table entry to enter edit mode, and select the check box that appears in edit mode.
12. Ensure that Identifier is selected in the drop-down box for the Display Name Field.
13. Save the changes and close the editor.
14. Right-click the RBA_Status data type and click View Data Items.
15. Verify that some events from Netcool/OMNIbus are listed.
16. Close the Data Items: RBA_Status view.
Configure and test the RBA_Derby data source connection
1. Log in to Netcool/Impact.
2. Switch to the RunbookAutomation project from the drop-down menu in the top right corner.
3. Click the Data Model tab.
4. Right-click the RBA_Derby datasource and click Edit.
5. Test the connection to ensure that you can communicate successfully with your Derby database.
Note: If you cannot communicate with the Derby database, ensure that the username, password, host, and port are correctly set up in the datasource.
6. Save the changes and close the editor.
(Netcool/Impact V7.1.0.19 only) Configure the RBA_EventReader service
1. Log in to Netcool/Impact.
2. Switch to the RunbookAutomation project from the drop-down menu in the top right corner.
3. Click the Services tab.
4. Right-click the RBA_EventReader service and click Edit.
5. Switch to the Event Mapping tab.
6. In the Event Matching area, select Stop testing after first match.
7. In the Event Locking area, ensure that the Expression field is empty and that the Enable check box is not selected.
8. Click Save and close the editor.
9. Restart the RBA_EventReader service.
10. Enable persistence to preserve the manual configuration after a pod restart.
Configuring a secondary ObjectServer
If there are multiple ObjectServers, perform the following step on each ObjectServer:
Update the ObjectServer
• IBM Runbook Automation requires additional fields in the alerts.status events table. If you have a high availability ObjectServer setup, you must run the command for both the primary and backup server. Copy the following file to the ObjectServer:

$IMPACT_HOME/add-ons/rba/db/rba_objectserver_fields_update.sql

Then run the following command:

$OMNIHOME/bin/nco_sql -username <username> -password <password> < rba_objectserver_fields_update.sql

<username> is a placeholder for the ObjectServer user name and <password> for the ObjectServer password.
Note: The < operator is used to pipe the file to the ObjectServer to create the new fields.

Configuring a secondary Netcool/Impact server
If any secondary Impact servers are installed, perform the following steps on the secondary Impact servers:
• The Derby Database Failure Policy must be set to Failover/Failback.
• Ensure that the following property is set in the primary server's properties file:
– Add the following property to the $IMPACT_HOME/etc/<servername>_server.props file (for example, NCI1_server.props) on the primary Impact server:

impact.server.preferredprimary=true

When these steps have been completed, start the secondary Impact servers. The replication in Derby will copy the Runbook Automation configuration across to the secondaries. The Impact server logs can be monitored to confirm that the configuration has been copied to the secondaries.
Note: The following steps are executed on the primary server only:
1. Import the database schema and Runbook Automation project.
2. "Configure and test RBA_ObjectServer data source connection" on page 250.
3. "Configure and test RBA_Derby data source connection" on page 251.

Import certificate
The Netcool/Impact servers use SSL connections to communicate with IBM Runbook Automation. Therefore, the server certificate for Runbook Automation must be imported into Netcool/Impact's truststore.
Note: Run the following steps on the Impact server, on any secondary Impact servers if you have them installed, and on the Impact GUI servers.
Import the signed certificate:
1. On Linux systems, enter the following command to retrieve the correct certificate:

echo -n | openssl s_client -servername <hostname> -connect <hostname>:<port> | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > file.cert

The values for <hostname> (2 occurrences) and <port> (1 occurrence) are determined as follows:
RBA Marketplace (standalone or as part of Cloud Event Management)
<hostname> is rba-mp.us-south.runbookautomation.cloud.ibm.com. <port> is 443.
RBA Private Deployment
<hostname> is the host name of your RBA server. <port> is 3005.
RBA as part of Cloud Event Management on IBM Cloud Private
<hostname> is the host name of CEM. <port> is 443.
RBA as part of Cloud Event Management on IBM Cloud
<hostname> depends on the region used:
Sydney: console.au-syd.cloudeventmanagement.cloud.ibm.com.
London: console.eu-gb.cloudeventmanagement.cloud.ibm.com.
Dallas: console.us-south.cloudeventmanagement.cloud.ibm.com.
<port> is 443.
No port needs to be specified for Cloud Event Management or RBA on Cloud. For a Runbook Automation Private Deployment, the port is 3005.

If the command does not work in your environment, use the following variant of the command:

ex +'/BEGIN CERTIFICATE/,/END CERTIFICATE/p' <(echo | openssl s_client -showcerts -servername <hostname> -connect <hostname>:<port>) -scq > file.cert

If errors occur, make sure that the exported certificate stored in file.cert contains a full and valid certificate. Errors like verify error:num=20:unable to get local issuer certificate occur due to a missing CA root certificate for the DigiCert CA. The certificate begins and ends as follows:

-----BEGIN CERTIFICATE----- ... -----END CERTIFICATE-----

On Windows systems, use your preferred browser to export the certificate. 2. Use the following command to import the certificate:

Warning:
• The import_cert script does not only import the certificate; it also restarts the Netcool/Impact server and the Netcool/Impact GUI server. If this is a production environment, you should run this script during planned maintenance only.
• If you have Netcool/Impact running under process control, stop all Netcool/Impact processes in the process control and restart them manually using the stop and start scripts found in $IMPACT_HOME/bin. This is necessary because the import_cert.sh script will start and stop the Netcool/Impact processes. Once the import_cert.sh script completes, stop all Netcool/Impact processes and restart them using the process control.
Note: If you need to change your RBA certificate, you must delete the old certificates before running the import script again. Use the following command for the Netcool/Impact server (for example, replace <server> with NCI) and for the Netcool/Impact GUI server (for example, replace <server> with ImpactUI) to delete the outdated RBA certificate from the respective keystores:

$IMPACT_HOME/sdk/bin/keytool -delete -alias rba_certificate -keystore $IMPACT_HOME/wlp/usr/servers/<server>/resources/security/trust.jks -storepass <password>

On Linux systems:

$IMPACT_HOME/install/cfg_addons/rba/import_cert.sh <password>

On Windows systems:

<IMPACT_HOME>/install/cfg_addons/rba/import_cert.bat <password>

where <password> is your Netcool/Impact admin password.

Creating a custom certificate for Red Hat OpenShift
A custom certificate is required for the Runbook Automation and Netcool/Impact integration on Red Hat OpenShift.

About this task
If you are still using the default OpenShift ingress certificate, you must update it to a certificate that has the correct Subject Alternative Names set. The default certificate has only *.apps.cluster-domain, and this is not sufficient for external connections to Netcool Operations Insight to be trusted. The custom certificate must have at least the following Subject Alternative Names:
• *.apps.cluster-domain
• *.noi-cr-name.apps.cluster-domain

Chapter 5. Configuring 253 For full details of how to configure a custom ingress certificate, go to https://docs.openshift.com/ container-platform/4.4/networking/ingress-operator.html#nw-ingress-setting-a-custom-default- certificate_configuring-ingress. The following instructions describe how to create a self-signed certificate that contains the required Subject Alternate Names and apply it to the OpenShift ingress configuration.

Procedure 1. Create an OpenSSL configuration file similar to the following example. Update the fields to your requirements and update the values for [alt_names] to cover your domain names.

[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
C = US
ST = VA
L = SomeCity
O = MyCompany
OU = MyDivision
CN = apps.cluster-domain
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = *.apps.cluster-domain
DNS.2 = *.noi-cr-name.apps.cluster-domain

2. Save the file on a system with OpenSSL and run the following command:

openssl req -x509 -nodes -days 730 -newkey rsa:2048 -keyout server.key -out server.crt -config req.conf -extensions 'v3_req'

This creates two files: the key (server.key) and the certificate (server.crt).
3. Run the following commands to configure OpenShift to use the key and the certificate for ingress:

oc --namespace openshift-ingress create secret tls custom-certs-default --cert=server.crt --key=server.key

oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default --patch '{"spec":{"defaultCertificate":{"name":"custom-certs-default"}}}'

4. Run the following command on the Netcool/Impact server to import server.crt into the Netcool/Impact truststore:

./keytool -import -alias 'certificatealias' -file '<path to server.crt on the Impact server>' -keystore /opt/IBM/tivoli/impact/wlp/usr/servers/NCIP/resources/security/trust.jks
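Two optional checks, sketched here for convenience (the keystore password is a placeholder): confirm that the Subject Alternative Names made it into the generated certificate, and confirm that the import landed in the Netcool/Impact truststore:

# Verify the SAN entries in the self-signed certificate created in step 2
openssl x509 -in server.crt -noout -text | grep -A 1 "Subject Alternative Name"

# List the imported entry in the Netcool/Impact truststore
$IMPACT_HOME/sdk/bin/keytool -list -alias certificatealias \
  -keystore $IMPACT_HOME/wlp/usr/servers/NCIP/resources/security/trust.jks -storepass <password>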

Configure user access rights
IBM Runbook Automation must be connected to Netcool/Impact for trigger management. This connection is secured with a user name and a password. If you want to create a new user, run step 1 and map the role in step 2. You can also add the role to an existing user and run step 2 only.
1. To create a user, enter:

cd $IMPACT_HOME/install/security
./configUsersGroups.sh -add -user <username> -password <password>

2. Map the impactRBAUser role to an existing user or to the user that you created.

cd $IMPACT_HOME/install/security
./mapRoles.sh -add -user <username> -roles impactRBAUser

Update the Netcool/Impact configuration
Edit the default values in the Netcool/Impact Derby database to configure launch-in-context support and integration with IBM Runbook Automation. The Netcool/Impact configuration for fully automated runbooks is stored in the Netcool/Impact Derby database, in the rbaconfig.defaults table, with the exception of RBAAPIKeyPassword. The table contains the following default values:

Table 44. Default values in the Netcool/Impact Derby database

RBAHost=''
RBA Marketplace (standalone or as part of Cloud Event Management): the base URL is https://rba-mp.us-south.runbookautomation.cloud.ibm.com.
RBA Private Deployment: the base URL is https://<hostname>.
RBA as part of Cloud Event Management on IBM Cloud Private: the base URL is https://<hostname>.
RBA as part of Cloud Event Management on IBM Cloud: the base URL depends on the region used:
Sydney: https://rba.au-syd.cloudeventmanagement.cloud.ibm.com.
London: https://rba.eu-gb.cloudeventmanagement.cloud.ibm.com.
Dallas: https://rba.us-south.cloudeventmanagement.cloud.ibm.com.

RBAManualExecHost=''
For all deployment types except Cloud, RBAManualExecHost= is the same value as RBAHost=. Change the default to the URL you were given as part of your subscription. If you work with the URL https://ibmopsmanagement.mybluemix.net/index?subscriptionId=EF0E868D1965A2D1E6F64EDFF51BA985778C690A2, specify ibmopsmanagement.mybluemix.net as the default value.

RBAPort='443'
No changes are required for the RBA Application Port, except where you must change the default value to the port of the runbook service. The default value of the runbook service port is 3005.

RBAProtocol='https'
No changes are required for the RBA Application Protocol.

RBARESTPath='/api/rba/' (Netcool/Impact 7.1.0.13 or below)
RBARESTPath='/api/v1/rba/' (Netcool/Impact 7.1.0.14 or higher)
No changes are required for the RBA REST path.

RBARESTPathToView='apipath' (Netcool/Impact 7.1.0.13 or below)
RBARESTPathToView='/api/v1/rba/' (Netcool/Impact 7.1.0.14 or higher)
No changes are required for the RBA API path.

RBAAPIKeyName=''
RBAAPIKeyPassword=''
API key to access Runbook Automation from Netcool/Impact. Generate an appropriate API key by using the API Keys page. For more information, see "API Keys" on page 688. The API key password is not stored in the database but in a separate file, see "Storing the API Key Password" on page 257.
Notes:
• For more information about how to create API keys, see "API Keys" on page 688.
• All executions of fully automated runbooks are linked to the user ID that created the API key. It is therefore recommended to use a functional user ID to create the API keys. The following criteria apply:
– Log in to RBA using the functional user ID to create API keys.
– All fully automated runbooks that are executed will be linked to this functional user ID.
– To investigate fully automated runbook logs, log in to RBA using the functional user ID. Runbook executions can be viewed in the Execution page.

RBASubscription=''
Your Runbook Automation subscription. You can find your subscription name in the Runbook Automation URL. For example, the URL https://ibmopsmanagement.mybluemix.net/index?subscriptionId=EF0E868D1965A2D1E6F64EDFF51BA985778C690A2 contains the subscription ID EF0E868D1965A2D1E6F64EDFF51BA985778C690A2. You can find your subscription name in the CEM URL. For example, the URL https://<hostname>/landing?subscriptionId=EF0E868D1965A2D1E6F64EDFF51BA985778C690A2 contains the subscription ID EF0E868D1965A2D1E6F64EDFF51BA985778C690A2. Set this value to "ld". Beginning with Netcool/Impact 7.1.0.14, you no longer need to set this field.

GetHTTPUseProxy='false'
GetHTTPProxyHost='localhost'
GetHTTPProxyPort='8080'
Required only if your Netcool/Impact server connects to Runbook Automation through a proxy server. A proxy server is usually not needed for the Netcool/Impact server to connect to Runbook Automation.

MaxNumTrialsToUpdateRBAStatus='60' (Netcool/Impact 7.1.0.13 or below)
MaxNumTrialsToUpdateRBAStatus='600' (Netcool/Impact 7.1.0.14 or higher)
The number of seconds to keep retrying for the status of a fully automated runbook. The recommended value is 600 (10 minutes). If your fully automated runbooks take longer than this time frame to complete, increase the value for this property accordingly.

NumberOfSampleEvents='10'
Number of sample events that are displayed when a new trigger is created. Sample events are displayed in the View Sample Events dialog when you create a new trigger in IBM Runbook Automation.

Run the following commands to update the fields in the Impact Derby database:

$IMPACT_HOME/bin/nci_db connect

The output is similar to the following:

ij version 10.8

Connect to your database with the same connection string as identified in "Postinstallation of Netcool/Impact V7.1.0.18 or higher" on page 250. For example:

ij> connect 'jdbc:derby://localhost:1527/ImpactDB;user=impact;password=derbypass;';
ij>

Use the sample commands to update the fields:

update rbaconfig.defaults set FieldValue = 'RBADevelopmentTeam/hytmamvirtex' WHERE FieldName='RBAAPIKeyName';
update rbaconfig.defaults set FieldValue = 'rba.mybluemix.net' WHERE FieldName='RBAHost';
update rbaconfig.defaults set FieldValue = 'ibmopsmanagement.mybluemix.net' WHERE FieldName='RBAManualExecHost';
update rbaconfig.defaults set FieldValue = '600' WHERE FieldName='MaxNumTrialsToUpdateRBAStatus';
update rbaconfig.defaults set FieldValue = 'EF0E868D1965A2D1E6F64EDFF51BA985778C690A2' WHERE FieldName='RBASubscription';
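To review the resulting values before you disconnect, you can query the table from the same ij session, for example:

ij> select FieldName, FieldValue from rbaconfig.defaults;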

Note: Starting with IBM Tivoli Netcool/Impact V7.1.0.17: after you have changed the configuration, open the Netcool/Impact UI, switch to the Policies tab, and run the "RBA_ResetConnectionParameters" policy (you might need to switch to the Global project view to see this policy listed).

Storing the API Key Password

The API key password that Netcool/Impact uses to access Runbook Automation is based on the configuration property RBAAPIKeyPassword and must be encrypted by using $IMPACT_HOME/bin/nci_crypt. For example, an encrypted value can look as follows:

{aes}BE0A8FAB4084D460CBD664FECDE3674A8B728BE7986F56F8B2F542C2056B4A2D1E6F64EDFF51BA985778C690A2E95

This encrypted value must be stored in a text file within the Netcool/Impact install location. Create the file $IMPACT_HOME/etc/RBAAPIKeyPassword.txt and copy the encrypted value into this file.
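For example, the following sketch encrypts the password and writes it to the expected file. It assumes that nci_crypt accepts the clear-text password as its argument and prints the encrypted value to standard output; replace myAPIKeyPassword with the real API key password.

# encrypt the API key password and store it where Netcool/Impact expects it
$IMPACT_HOME/bin/nci_crypt 'myAPIKeyPassword' > $IMPACT_HOME/etc/RBAAPIKeyPassword.txt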

Note: Starting with IBM Tivoli Netcool/Impact V7.1.0.17: after you have changed the configuration, open the Netcool/Impact UI, switch to the Policies tab, and run the "RBA_ResetConnectionParameters" policy (you might need to switch to the Global project view to see this policy listed).
Note: If any additional secondary Netcool/Impact servers are installed, repeat all of the steps on each secondary Netcool/Impact server. This is necessary because the encrypted value of the API key password depends on the individual Netcool/Impact system and cannot be shared by the primary and secondary servers.

Enable the launch-in-context menu

You must enable the launch-in-context menu to start manual or semi-automated runbooks from events. The launch-in-context (LiC) feature enables the display of the Execute Runbook page with the event context (runbook and parameters) if a trigger for the event has been matched. Enabling this feature provides the following functionality in the event console:
• The context menu of all events will have an additional entry, named LaunchRunbook.
• Click LaunchRunbook to:
  – Launch the Runbook Automation application, using the runbook ID and parameter mappings as defined in the matching trigger.
  – Display an alert with an error message, if no trigger has been matched.
For more information about how to create a trigger, see "Create a trigger" on page 401.

Before you begin
You must add Netcool_OMNIbus_Admin and Netcool_OMNIbus_User in the groups/group roles under Access Criteria (step "1.i" on page 259). If a user does not belong to these groups, they cannot access the Launch Runbook launch-in-context menu.

Procedure
Use the Netcool/OMNIbus Web GUI to create the launch-in-context menu.
1. Create a launch-in-context menu and a corresponding menu entry.
   a. Log on to Netcool/OMNIbus Web GUI as an administrator.
   b. Select Administration > Event Management Tools > Tool Configuration.
   c. Click the Create Tool icon to create a new tool.
   d. Set the Name field to LaunchRunbook.
   e. From the Type drop-down field, select Script.
   f. Make sure the data source, for example OMNIbus, is selected in the data source configuration.
   g. In the Script field, enter:

var runbookId = '{$selected_rows.RunbookID}';
var runbookParams = '{$selected_rows.RunbookParametersB64}';
runbookParams = encodeURIComponent(runbookParams);
if (runbookId == '' || runbookId == ' ' || runbookId == ',' || runbookId == '0' || runbookId == 0) {
    alert("This event is not linked to a runbook");
} else {
    var url = 'https://<hostname>/index?subscriptionId=<subscriptionId>&dashboard=rba.dashboard.executerunbook&context=%7B%22rbid%22%3A%22{$selected_rows.RunbookID}%22%2C%22bulk_params%22%3A%22{$selected_rows.RunbookParametersB64}%22%7D';
    var wnd = window.open(url, '_blank');
}

If you use Runbook Automation as part of Cloud Event Management, use the following script instead, where CEM_HOST_NAME is the host name of your Cloud Event Management installation:

var runbookId = '{$selected_rows.RunbookID}';
var runbookParams = '{$selected_rows.RunbookParametersB64}';
runbookParams = encodeURIComponent(runbookParams);
if (runbookId == '' || runbookId == ' ' || runbookId == ',' || runbookId == '0' || runbookId == 0) {
    alert("This event is not linked to a runbook");
} else {
    var url = 'https://CEM_HOST_NAME/cemui/runbooks/library/execute/'+runbookId+'?subscriptionId=<subscriptionId>&bulk_params='+runbookParams+'&ISC.NEWINSTANCE=false';
    var wnd = window.open(url, '_blank');
}

Where:
<hostname> is your base URL to IBM Runbook Automation.
<subscriptionId> is your subscription ID.

Obtaining the base URL of Runbook Automation
RBA Marketplace (standalone or as part of Cloud Event Management)
    The base URL is https://rba-mp.us-south.runbookautomation.cloud.ibm.com.
RBA as part of Cloud Event Management on IBM Cloud Private
    The base URL is https://<hostname>.
RBA as part of Cloud Event Management on IBM Cloud
    The base URL depends on the region used:
    Sydney: https://console.au-syd.cloudeventmanagement.cloud.ibm.com.
    London: https://console.eu-gb.cloudeventmanagement.cloud.ibm.com.
    Dallas: https://console.us-south.cloudeventmanagement.cloud.ibm.com.
   h. Ensure the Execute for each selected row check box is not selected, as only single events can be selected as the context for launching a runbook.
   i. Select the appropriate group roles in the Access Criteria section, that is Netcool_OMNIbus_Admin and Netcool_OMNIbus_User.
   j. Click Save.
   For more information about configuring launch-in-context support, see Configuring launch-in-context integrations with Tivoli products.
2. Complete the following steps to add the Runbook Automation entry to the launch-in-context menu of the Netcool/OMNIbus event list:
   a. Log on to Netcool/OMNIbus Web GUI as an administrator.
   b. Select Administration > Event Management Tools > Menu Configuration.
   c. Select Alerts > Modify. Netcool/OMNIbus Web GUI displays the Menus Editor dialog box.
   d. From the available items, select LaunchRunbook.
   e. Select the arrow to add LaunchRunbook to the Current Items list. You can optionally rename the menu entry, for example to add a space between "Launch" and "Runbook".
   f. Click Save.
Note: It might still be possible to launch a runbook for an event even if a trigger does not exist anymore. This can happen if an event had a runbook ID assigned because a matching filter existed when the event was created.

Configure a connection to the Netcool/Impact server

This section outlines the steps for adding a connection to Netcool/Impact in IBM Runbook Automation.

Before you begin
Verify that all prerequisites are met, as described in "Event Trigger (Netcool/Impact)" on page 246. The IBM Secure Gateway client is required for Cloud Event Management on Cloud and Runbook Automation on Cloud. It is not required for Cloud Event Management on IBM Cloud Private or a Runbook Automation Private Deployment.

Procedure
Complete the following steps to configure a new connection:
1. Log on to Bluemix.

2. Configure a connection on the Connections page. Click Edit on the Event Trigger tile to open the configuration window.
3. Select I want an Event Trigger (OMNIbus/Impact).
4. For the IBM Netcool/Impact REST service URL, enter the host name and port, or the IP address and port, of your local Netcool/Impact GUI server installation, followed by ibm/custom/RBARestUIServlet. For example, https://9.168.48.28:16311/ibm/custom/RBARestUIServlet.
Note: The URL can be a private IP address. The gateway infrastructure routes your request to the gateway connector, from which you can then use IP addresses local to your environment. In general, you should reference the host name of the Impact server as it is included in the "Common Name (CN)" field of the SSL certificate for the Impact server.
For the user and password, add the username and password that you configured in "Configure user access rights" on page 254.
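Before you save the connection, it can be useful to check that the REST servlet is reachable from the network where the requests will originate. The following sketch assumes basic authentication with the user created earlier and a self-signed certificate (hence -k); the host name and credentials are example values:

# quick reachability check of the Netcool/Impact REST servlet
curl -k -u rbauser:rbauserpass https://impacthost.example.com:16311/ibm/custom/RBARestUIServlet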

Upgrading Netcool/Impact to a higher fix pack level when the RBA add-on is already installed

Migrate the RBA add-on from a previous Netcool/Impact level to a new Netcool/Impact level.

Procedure
• If the RBA add-on is already installed and configured within a Netcool/Impact installation, and you upgrade the Netcool/Impact installation to a higher level, the upgrade also refreshes the RBA add-on, so typically no additional action is required.
• It is recommended to install Netcool/Impact V7.1.0.19 or higher.
• Verify the connection to RBA_Derby:
  a) Open the Netcool/Impact UI and click Data Model > RBA_Derby > Test Connection.
  b) If the message says Connection could not be made, complete these additional steps:
     a. Edit RBA_Derby.
     b. In Primary Source / Host Name, change localhost to the actual host name.
     c. Click Test Connection. The message should say Connection OK.
     d. Save the RBA_Derby configuration and close the editor.
     e. Repeat RBA_Derby > Test Connection. The message should say Connection OK.
• Some additional configuration steps are required after you have installed Netcool/Impact V7.1.0.19 or higher:
  a) Add the new field RunbookIDArray to the alerts.status table in the Netcool ObjectServer. If you have a high availability ObjectServer setup, you must run the command for both the primary and backup server. Copy the following file to the ObjectServer:

$IMPACT_HOME/add-ons/rba/db/rba_objectserver_fields_updateFP19.sql

Then run the following command:

$OMNIHOME/bin/nco_sql -username <username> -password <password> < rba_objectserver_fields_updateFP19.sql

<username> is a placeholder for the ObjectServer user name and <password> for the ObjectServer password.
Note: The < operator is used to pipe the file to the ObjectServer to create the new fields.
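For example, on a default installation the command might look as follows. The ObjectServer name NCOMS and the root user with an empty password are common defaults and are shown here as assumptions:

# apply the FP19 schema update to the ObjectServer
$OMNIHOME/bin/nco_sql -server NCOMS -username root -password '' < rba_objectserver_fields_updateFP19.sql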

b) Refresh the fields of the RBA_Status data type.
   a. Log in to Netcool/Impact.
   b. Switch to the RunbookAutomation project from the drop-down menu in the top right corner.
   c. Click the Data Model tab.
   d. Expand the RBA_ObjectServer data source in the tree view.
   e. Right-click the RBA_Status data type and click Edit.
   f. In the Table Description area, click Refresh Fields for the alerts.status base table.
   g. Click Save and close the editor.
c) Configure the RBA_EventReader service.
   a. Log in to Netcool/Impact.
   b. Switch to the RunbookAutomation project from the drop-down menu in the top right corner.
   c. Click the Services tab.
   d. Right-click the RBA_EventReader service and click Edit.
   e. Switch to the Event Mapping tab.
   f. In the Event Matching area, select Stop testing after first match.
   g. In the Event Locking area, ensure that the Expression field is empty and the Enable check box is not selected.
   h. Click Save and close the editor.
   i. Restart the RBA_EventReader service.

Migrating from IBM Workload Automation Agent to SSH

Before you begin
1. Decide if you need a jump server to access endpoints using SSH. For a private deployment, the SSH calls are executed from the RBA server by default; for a cloud deployment, they are executed from the system where the IBM Secure Gateway client is installed. If that system has no connection to the endpoints, a jump server is required. For example, use the system where the TWA agent was installed as a jump server.
2. Plan a maintenance window for the migration. Note that after step 1 of the Migration procedure, you cannot execute automations until step 3 is completed.
Note: IBM Workload Automation Agent stores the credentials to access endpoints in a credential vault. With the SSH provider, this is no longer required; instead, SSH public key authentication is used. This can be set up for specific users or the root user. For more information, see "SSH script automation provider" on page 247.

Procedure
1. Delete the existing TWA script connection.
2. Create a new SSH script connection.
3. Follow the steps described in the on-screen dialog. For more information, see "SSH script automation provider" on page 247.
4. If an SSH script connection has been set up and script automations are running successfully on the target endpoints, IBM Workload Automation Agent is no longer required and can be uninstalled.
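As part of setting up SSH public key authentication, a key pair is typically generated and its public key distributed to each target endpoint (or to the jump server). A minimal sketch follows; the functional user rbauto and the endpoint host name are hypothetical:

# generate a key pair for the functional user
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_rba
# install the public key on the target endpoint
ssh-copy-id -i ~/.ssh/id_rsa_rba.pub rbauto@endpoint1.example.com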

Using the REST API

Netcool Operations Insight provides a REST API for operations such as sending test events and managing event policies.

About this task
You need an API key to access your event management service. You can then use the key for the API functions.

Procedure
1. You can use the name and password in your event management service credentials to connect to the API, or you can generate a key as follows:
   a) Click API Keys on the event management Administration page.
   b) Click New API key.
   c) Enter a description for the key in API key description.
   d) Specify which parts of the event management API the key provides access to. Go to Permissions and ensure only those check boxes are selected that you want the key to provide access to. All APIs are selected by default; clear the ones you do not want the API key to provide access to.
   Important: Make a note of the APIs the key provides access to. For example, note it in the description you enter for the key. You cannot view or change which APIs were selected for the key later.
   e) Click Generate. A new name and password are generated. Make a note of these values.
   Important: The password is hidden by default. To view and be able to copy the password, set Display password to Show. Ensure you make a note of the password. For security reasons the password cannot be retrieved later. If you lose the password, you must delete the API key and generate a new one.
   f) Click Close.
   g) Click the icon next to Manage API keys to view the base URL for the API.
2. Use the API key for the API calls to your event management service. You can use the API to create events with custom payloads, and trigger sample events. For more information about the event management APIs, see Developer Tools at https://cloud.ibm.com/docs?tab=api-docs.
3. Use the Runbook Automation API to integrate RBA with other software and tools. The base URL of the RBA API as part of Netcool Operations Insight is https://<hostname>/api/v1/rba. For Netcool Operations Insight on OpenShift deployments, use the API keys for Netcool Operations Insight to access the RBA API. For the full interface and complete list of available RBA API endpoints, see https://rba.us-south.cloudeventmanagement.cloud.ibm.com/apidocs-v1.
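For example, a hypothetical call that lists runbooks through the RBA API, authenticating with the generated API key name and password. The endpoint path shown is illustrative; check the API documentation linked above for the exact interface:

# list runbooks using the RBA REST API (illustrative endpoint)
curl -k -u '<api-key-name>:<api-key-password>' https://<hostname>/api/v1/rba/runbooks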

Configuring on-premises systems

Perform the following tasks to configure the components of your on-premises Netcool Operations Insight deployment.

Connecting event sources to a Netcool Operations Insight on-premises deployment

Your IBM Netcool Operations Insight deployment provides rich capabilities to integrate events from virtually any source across local, hybrid, and cloud environments. Set up fast and simple integrations to connect private or public cloud event sources and view data in Web GUI.

For information about connecting local event sources, using probes and gateways, to your Operations Management deployment, see "Connecting event sources to Netcool Operations Insight on a Cloud deployment" on page 194. For more information about connecting local event sources, using probes and gateways, to your IBM Netcool Operations Insight on premises deployment, see "Quick reference to installing" on page 41.

Connecting event sources from your private cloud environment

Learn how to integrate cloud events and view cloud event data in Web GUI. Complete the following steps to integrate private cloud events with your IBM Netcool Operations Insight on premises deployment:
1. Download and install the IBM Tivoli Netcool/OMNIbus Probe for Message Bus. To download the probe, see Netcool/OMNIbus documentation: Generic integrations using the Message Bus Probe. To install the probe, see Netcool/OMNIbus documentation: Installing probes and gateways on Tivoli Netcool/OMNIbus V8.1.
2. Configure the Message Bus Probe with the required ObjectServer fields. For more information, see Netcool/OMNIbus documentation: Integrating cloud event management with Netcool Operations Insight.
3. Install event management. For more information, see the event management documentation and https://www.ibm.com/support/knowledgecenter/SSURRN/com.ibm.cem.doc/em_cem_install_openshift.html.
4. Create an outgoing integration. For more information, see event management documentation: Sending incident details to Netcool/OMNIbus.
5. Configure an event policy to forward events to Netcool Operations Insight. In the Action section, select Forward events. For more information, see event management documentation: Setting up event policies.
It may take a moment before event management starts to forward events to Netcool Operations Insight. Verify that event management events appear in the Event List.

Sending events to event management
You can also view Netcool Operations Insight events in event management, by creating an incoming integration and installing and configuring the Netcool/OMNIbus Gateway for event management. For more information, see event management documentation: Configuring Netcool/OMNIbus as an event source.

Connecting event sources from your public cloud environment

Learn how to integrate public cloud events and view cloud event data in Web GUI. Complete the following steps to connect public cloud events with your IBM Netcool Operations Insight on premises deployment:
1. Download and install the IBM Tivoli Netcool/OMNIbus Probe for Message Bus. To download the probe, see Netcool/OMNIbus documentation: Generic integrations using the Message Bus Probe. To install the probe, see Netcool/OMNIbus documentation: Installing probes and gateways on Tivoli Netcool/OMNIbus V8.1.
2. Configure the Message Bus Probe with the required ObjectServer fields. For more information, see Netcool/OMNIbus documentation: Integrating cloud event management with Netcool Operations Insight.
3. Subscribe to cloud event management for your public cloud environment. For more information, see IBM Cloud Event Management on Marketplace.
4. Create an outgoing integration. For more information, see event management documentation: Sending events to Netcool/OMNIbus.

5. Configure an event policy to forward events to Netcool Operations Insight. In the Action section, select Forward events. For more information, see event management documentation: Setting up event policies.
6. Configure the secure gateway. For more information, see event management documentation: Sending events to Netcool/OMNIbus via the IBM Secure Gateway.
It may take a moment before event management starts to forward events to Netcool Operations Insight. Verify that event management events appear in the Event List.

Sending events to event management
You can also view Netcool Operations Insight events in event management, by creating an incoming integration and installing and configuring the Netcool/OMNIbus Gateway for event management. For more information, see event management documentation: Configuring Netcool/OMNIbus as an event source.

Configuring Operations Management

Perform the following tasks to configure the components of Operations Management.

Configuring Event Analytics

Perform these tasks to configure and optionally customize the system prior to use.

Configuring Event Analytics using the wizard

In Netcool/Impact V7.1.0.13 and later, it is recommended to use the Event Analytics configuration wizard instead of the ./nci_trigger command to edit properties in the NOI Shared Configuration properties file. The setup wizard guides you through the Event Analytics configuration process. You must run the Event Analytics configuration wizard after upgrading to Netcool/Impact V7.1.0.13 to verify and save your configuration.
Note: If you are running the wizard within a load-balancing environment, then first perform the following steps on each Netcool/Impact UI server:
1. Edit the $IMPACT_HOME/etc/server.props file.
2. Set the impact.noi.ui.hostname variable to the hostname of one of the Netcool/Impact UI servers.

To launch the wizard, click Insights and select Event Analytics Configuration. The Event Analytics Configuration wizard consists of two parts:
1. Configuring access to the following databases:
   • The Tivoli Netcool/OMNIbus Historical Event Database, containing historical event data used to analyze historical events for Event Analytics.
   • The Tivoli Netcool/OMNIbus ObjectServer, containing live event data to be enriched based on insights derived from Event Analytics processing.
2. Configuring settings to control Event Analytics processing.

Configuring the historical event database

Configure access to the Tivoli Netcool/OMNIbus historical event database that contains the data used to analyze historical events for Event Analytics. On the historical event database window, you specify the database type, connection details, table name, and timestamp format.

Before you begin
If you have custom fields in your Historical Event database, then before doing this task you must first map the custom field names to the corresponding standard field names in Netcool/Impact by creating a database view, as described in "Mapping customized field names" on page 294. In the appropriate step in the procedure below, you must specify that database view name instead of the Historical Event database reporter_status table name.
You can set up an Oracle database as your historical event database with a custom URL. You can also configure a system identifier (SID) or Service Name in order to connect to the database. The SID or Service Name configuration settings are not available in the Event Analytics Configuration wizard. Instead, refer to the backend configuration instructions: "Configuring Oracle database connection within Netcool/Impact" on page 275.

Procedure
1. Specify the database type used for the historical event database:
   • Db2
   • Oracle
   • MS SQL Server
2. Enter the connection details for the database in the fields provided.
   Hostname
       Enter the name of the server hosting the database.
   Port
       Enter the port number to be used to connect to the server that hosts the database.
   Username
       Enter the username for connecting to the database.
   Password
       Enter the password for the specified username.
   The remaining fields in this section differ depending on the database type selected in step 1.
   • If you selected a database type of Db2 or MS SQL Server, then complete the following field:
     Database Name
         Enter the name of the database you want to access. For example, REPORTER.
   • If you selected a database type of Oracle, then complete the following fields:
     Communication method
         Specify whether to use a custom URL, Oracle SID (alphanumeric system identifier), or Oracle service name.
     Custom URL
         If you set Communication method to Custom URL, then specify the URL in the <hostname>:<port>:<SID> format.
         Note: For Real Application Clusters (RAC) servers, see additional information from the Tivoli Netcool/OMNIbus Knowledge Center here: https://www.ibm.com/support/knowledgecenter/SSSHYH_7.1.0.19/com.ibm.netcoolimpact.doc/user/rac_cluster_support.html
     SID
         If you set Communication method to SID, then specify the SID.
     Service name
         If you set Communication method to Service name, then specify the service name.
3. Click Connect to validate your connection to the historical event database.
4. For the table you want to query, select a Database schema and History table or view from the drop-down lists provided.
   a) The options available under Database schema are based on the username provided to connect to the historical event database.
   b) The options available under History table are based on the selected Database schema.

Important: If you have custom fields in your Historical Event database, and you created a database view to map these fields, as described in "Mapping customized field names" on page 294, then you must select that database view from the History table drop-down list.
5. Specify the timestamp field used in the historical event database to store the first occurrence of an event.

Specifying the primary and backup ObjectServer

On the ObjectServer window, enter the hostname, port, and user credentials to connect to the primary and backup ObjectServers.

Procedure
1. In the fields provided, enter the connection details to the primary ObjectServer:
   Hostname
       Enter the hostname where the primary ObjectServer is installed.
   Port
       Specify the port number that the primary ObjectServer will use.
   Username
       Enter the username to access the ObjectServer.
   Password
       Enter the password for the specified username.
2. To enable a backup ObjectServer, select the Enable backup ObjectServer check box and enter the connection details:
   Note: Selecting Enable backup ObjectServer enables the fail back option when Impact cannot connect to the database. Deselecting this option disables the backup.
   Hostname
       Enter the hostname where the backup ObjectServer is installed.
   Port
       Specify the port number that the backup ObjectServer will use.
   The Username and Password are the same as the credentials specified for the primary ObjectServer.
Click Connect to connect to the ObjectServer.

Adding report fields

Select report fields to add additional information to seasonal and related event reports, historical event reports, and instance reports.

Before you begin
If you add any custom columns to any of the reports defined in the Event Analytics Setup Wizard and define a column title for this column, then, if you want the column title to appear in a non-English language in the relevant Event Analytics report, you must edit the relevant customization files. For more information about rendering column labels in a non-English language, see http://ibm.biz/impact_trans. If you do not do this, then when the column title is displayed in the relevant Event Analytics report, it will appear in English, regardless of the locale of the browser.
If you encounter a blank screen when you reach this page, there might be missing fields in the Historical Event Database. To resolve this issue, see the relevant section in "Troubleshooting Event Analytics configuration" on page 673.

Procedure
1. Specify the Aggregate fields.

You can add aggregate fields to the seasonal and related event reports by applying predefined aggregate functions to selected fields from the Historical Event Database. These fields are displayed in the seasonal and related event reports in the same order as they appear below.
Example: To display the maximum severity of all the events that make up a related event group, select the SEVERITY field, then apply the Max aggregate function, and click Include in reports: Related.
Note: Once you have saved these settings, you must rerun the relevant configuration scans in order for the changes to become visible in your seasonal and related event reports.
2. Specify the Historical report fields. The fields specified here are displayed in the historical event report, in the same order as they appear below. The historical event report is shown when you drill in from a seasonal event to its contributing historical events.
Note: When you next open the historical event report, wait 20 seconds for your changes to appear.
3. Specify the Instance report fields. The fields specified here are displayed as additional fields in the instance report for a related event group, in the same order as they appear below. The instance report is shown when you drill into event details from a related event group, to show the instances of that group.
Note: It is best practice to add the Identifier column to the Instance report fields list. This ensures that you can see the event identifier value for each unique event listed.
Note: When you next open the instance event report, wait 20 seconds for your changes to appear.
4. Click Save to save your changes.

Configuring event suppression

Some events might not be important with respect to monitoring your network environment. For events that do not need to be viewed or acted on, event suppression is available as an action when creating a seasonal event rule.

About this task
For seasonal event rules, specify the ObjectServer fields to use for suppressing and unsuppressing events.

Procedure
1. To suppress an event, select a Suppression field and Suppression field value from the drop-down lists provided. The field and value that you define here are used to mark the event for suppression when the incoming event matches a seasonal event rule with event suppression selected as one of its actions.
2. To unsuppress an event, select an Unsuppression field and Unsuppression field value from the drop-down lists provided. The field and value that you define here are used to unsuppress an event when the incoming event matches a seasonal event rule with event suppression selected as one of its actions.
3. Click Save to save your changes.

Configuring event pattern processing

Configure how patterns are derived from related events using this example-driven wizard panel.

Before you begin
To configure event pattern processing, you must specify Historical Event Database columns to use for settings such as event type, event identity, and resource, or accept the columns specified as default. If you want to use custom columns, then you must first configure the Impact Event Reader to read these custom fields, as described in the following topic: Netcool/Impact Knowledge Center: OMNIbus event reader service.

About this task
An event pattern is a set of events that typically occur in sequence on a network resource. For example, on a London router LON-ROUTER-1, the following sequence of events might frequently occur: FAN-FAILURE, POWER-SUPPLY-FAILURE, DEVICE-FAILURE, indicating that the router fan needs to be changed. Using the related event group feature, Event Analytics discovers this sequence of events as a related event group on LON-ROUTER-1. Using the event pattern feature, Event Analytics can then detect this related event group on any network resource. In the example above, the related event group FAN-FAILURE, POWER-SUPPLY-FAILURE, DEVICE-FAILURE detected on the London router LON-ROUTER-1 can be stored as a pattern, and that pattern can be detected on any other network resource, for example, on a router in Dallas, DAL-ROUTER-5.

Procedure
1. Select the appropriate Historical Event Database column(s) for the following Global settings:
   Default event type
       An event type is a category of event; for example, FAN-FAILURE, POWER-SUPPLY-FAILURE, and DEVICE-FAILURE are event types. By default, event type information is stored in the following Historical Event Database column: ALERTGROUP. If you have another set of events that you categorize in a different way, then you can specify additional event type columns in step 2 below.
   Default event identity
       The event identity uniquely identifies an event on a specific network resource. By default, the event identity is stored in the following Historical Event Database column: IDENTIFIER.
   Resource
       A resource identifies a network resource on which events occur. In the example, LON-ROUTER-1 and DAL-ROUTER-5 are examples of resources on which events occur. By default, this resource information is stored in the following Historical Event Database column: NODE.
2. If you have another set of events that you categorize in a different way, you can add them as Additional event types.
   a) Select the check box to enable Additional event types.
   b) Click Add new. Add a row for each distinct set of events.
   c) Specify the filters and fields listed below for each set of events. Event Analytics uses these settings to determine event patterns for a set of events. Filters are applied from top to bottom, in the order that they appear in the table. You can change the order by using the controls at the end of the row. An example pair of filters is shown after this list.
      Type name
          Specify the type name. Use a name that is easily understandable, as it will be used later to identify this event type when associating specific event types with an event analytics configuration.
          Note: If at a later stage you are editing this page, and the event type has been associated with one or more event analytics configurations, then the Type name field is read-only.
      Database filter
          Specify the filter that matches this set of historical events in the Historical Event Database.
      ObjectServer filter
          Specify the filter that matches the corresponding set of live events in the ObjectServer. The ObjectServer filter should be semantically identical to the Database filter, except that you should specify ObjectServer column syntax for the columns.
      Event type field
          An event type is a category of event; for example, FAN-FAILURE, POWER-SUPPLY-FAILURE, and DEVICE-FAILURE are event types. For this set of events, specify the Historical Event Database column that stores event type information.

      Event identity field(s)
          The event identity uniquely identifies an event on a specific network resource. For this set of events, specify the Historical Event Database column or columns that store event identity information.
Note: You can delete any of the additional event types by clicking the trash can delete icon. If the type is already being used in one or more analytics configurations, then deleting the type will remove it from those configurations. To ensure your analytics results are fully synchronized, you should rerun the affected analytics configurations.
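For example, a Database filter and its semantically identical ObjectServer filter might look as follows. The column names are illustrative; historical database columns are typically upper case (as in the REPORTER_STATUS table), while the ObjectServer uses the alerts.status column names:

Database filter:     SEVERITY >= 4 AND ALERTGROUP = 'Hardware'
ObjectServer filter: Severity >= 4 AND AlertGroup = 'Hardware'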

Reviewing the configuration

On the Summary window, review your settings. You can also save the settings here, or click Back to make changes to the settings that you configured.

Procedure
1. Review the settings on the Summary window. Click Back or any of the navigation menu links to modify the settings as appropriate.
2. When you are satisfied with the configuration settings, click Save.

Exporting the Event Analytics configuration

Use the nci_trigger command to export a saved Event Analytics configuration to another system.

Procedure
1. To generate a properties file from the command-line interface, use the following command:

nci_trigger <SERVER> <username>/<password> NOI_DefaultValues_Export FILENAME <directory>/<filename>

Where:
<SERVER>
    The server where Event Analytics is installed.
<username>
    The user name of the Event Analytics user.
<password>
    The password of the Event Analytics user.
<directory>
    The directory where the file is stored.
<filename>
    The name of the properties file.
For example:

./nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Export FILENAME /tmp/eventanalytic.props

2. To import the modified properties file into Netcool/Impact, use the following command:

nci_trigger <SERVER> <username>/<password> NOI_DefaultValues_Configure FILENAME <directory>/<filename>

For example:

./nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Configure FILENAME /tmp/eventanalytic.props
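Putting the two commands together, a minimal export-edit-import round trip might look like the following sketch. The sed edit is just an illustration; save_event_threshold is one of the properties documented in the "Generated properties file" reference that follows.

# export the current configuration
./nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Export FILENAME /tmp/eventanalytic.props
# example edit: only save seasonal events with a confidence of 0.90 or higher
sed -i 's/^save_event_threshold=.*/save_event_threshold=.90/' /tmp/eventanalytic.props
# import the modified configuration
./nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Configure FILENAME /tmp/eventanalytic.props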

Generated properties file

Overwritten property values can be updated in the generated properties file. You can edit the generated properties file to set up and customize Netcool/Impact for seasonal events and related events. The following properties are in the generated properties file.

#
#############################################################################
#### NOI Shared Configuration ####
#############################################################################
#
# If you are updating the Rollup configuration, go to
# the end of the file.
# The following section holds the configuration for accessing
# alerts historical information and storing results:
# history_datasource_name Contains the Impact datasource name
# history_datatype_name Contains the Impact datatype name
# history_database_type Contains the Impact datasource type (Db2, Oracle, MSSQL)
# history_database_table Contains the database table and, if required, the schema, to access the event history
# results_database_type Contains the database type for storing results.
#
# Most likely you do not have to change this configuration
#
history_datasource_name=ObjectServerHistoryDb2ForNOI
history_datatype_name=AlertsHistoryDb2Table
history_database_table=Db2INST1.REPORTER_STATUS
history_database_type=Db2
#
results_database_type=DERBY
#
# Column name for the analysis
#
history_column_names_analysis=SUMMARY
#
# The column name where the timestamp associated with the records is stored
#
history_column_name_timestamp=FIRSTOCCURRENCE
#
history_db_timestampformat=yyyy-MM-dd HH:mm:ss.SSS
configuration_db_timestampformat=yyyy-MM-dd HH:mm:ss.SSS
#
#############################################################################
#### Seasonality Only Configuration ####
#############################################################################
#
# Will only save and process events of this confidence level or higher
#
save_event_threshold=.85
#
# Used in determining the confidence level ranges, by determining the threshold values.
# level_threshold_high Level is high, when confidence is greater than or equal to
# level_threshold_medium Level is medium, when confidence is greater than or equal to
# level_threshold_low Level is low, when confidence is greater than or equal to
# If the confidence doesn't meet any of these conditions, level will be set to unknown.
#
level_threshold_high=99
level_threshold_medium=95
level_threshold_low=0
#
# Rollup configuration adds additional information to the Seasonal Report data
# number_of_rollup_configuration Contains the number of additional rollup configurations
# rollup_<num>_column_name Contains the column name from which the data is retrieved
# rollup_<num>_type Contains the type value
# rollup_<num>_display_name A name that needs to be defined in the UI
# Types can be defined as follows:
# MAX, MIN, SUM, NON_ZERO, DISTINCT and EXAMPLE
# MAX: The maximum value observed for the column; if no value is ever seen this will default to Integer.MIN_VALUE
# MIN: The minimum value observed for the column; if no value is ever seen this will default to Integer.MAX_VALUE
# SUM: The sum of the values observed for the column.
# NON_ZERO: A counting column that counts "non-zero"/"non-blank" occurrences of events; this can be useful to track the proportion of events that have been actioned, or how many events had a ticket number associated with them.
# DISTINCT: The number of distinct values that have been seen for this key, value pair
# EXAMPLE: Show the first non-blank "example" of a field that contained this key; useful when running seasonality on a field that can't be accessed, such as ALERT_IDENTIFIER, and you want an example human-readable SUMMARY to let you understand the type of problem
#
number_of_rollup_configuration=2
rollup_1_column_name=SEVERITY
rollup_1_type=MIN
rollup_1_display_name=MINSeverity
rollup_2_column_name=SEVERITY
rollup_2_type=MAX
rollup_2_display_name=MAXSeverity
#
#############################################################################
#### Related Events Only Configuration ####
#############################################################################
#
# Rollup configuration adds additional information to the Related Events data
# reevent_number_of_rollup_configuration Contains the number of additional rollup configurations
# reevent_rollup_<num>_column_name Contains the column name from which the data is retrieved
# reevent_rollup_<num>_type Contains the type value
# reevent_rollup_<num>_display_name A name that needs to be defined in the UI
# reevent_rollup_<num>_actionable Numeric-only column that determines the weight for probable root cause
# Types can be defined as follows:
# MAX, MIN, SUM, NON_ZERO, DISTINCT and EXAMPLE
# MAX: The maximum value observed for the column; if no value is ever seen this will default to Integer.MIN_VALUE
# MIN: The minimum value observed for the column; if no value is ever seen this will default to Integer.MAX_VALUE
# SUM: The sum of the values observed for the column.
# NON_ZERO: A counting column that counts "non-zero"/"non-blank" occurrences of events; this can be useful to track the proportion of events that have been actioned, or how many events had a ticket number associated with them.
# DISTINCT: The number of distinct values that have been seen for this key, value pair
# EXAMPLE: Show the first non-blank "example" of a field that contained this key; useful when running seasonality on a field that can't be accessed, such as ALERT_IDENTIFIER, and you want an example human-readable SUMMARY to let you understand the type of problem
#
reevent_number_of_rollup_configuration=3
reevent_rollup_1_column_name=ORIGINALSEVERITY
reevent_rollup_1_type=MAX
reevent_rollup_1_display_name=MAXSeverity
reevent_rollup_1_actionable=true
reevent_rollup_2_column_name=ACKNOWLEDGED
reevent_rollup_2_type=NON_ZERO
reevent_rollup_2_display_name=Acknowledged
reevent_rollup_2_actionable=true
reevent_rollup_3_column_name=ALERTGROUP
reevent_rollup_3_type=EXAMPLE
reevent_rollup_3_display_name=AlertGroup
reevent_rollup_3_actionable=false
#
# Group Information adds additional group information under the Show Details -> Group More Information portion of the UI
# reevent_num_groupinfo Contains the number of group information columns to display
# reevent_groupinfo_<num>_column Contains the column name from which the data is retrieved
# The following columns are allowed:
# PROFILE, EVENTIDENTITIES, INSTANCES, CONFIGNAME, TOTALEVENTS, UNIQUEEVENTS, REVIEWED, GROUPTTL
# PROFILE: The relationship profile, or strength of the group.
# EVENTIDENTITIES: A comma separated list that creates the event identity.
# INSTANCES: The total number of group instances.
# CONFIGNAME: The configuration name the group was created under.
# TOTALEVENTS: The total number of events within the group.
# UNIQUEEVENTS: The total number of unique events within the group.
# REVIEWED: Whether the group has been reviewed by a user or not.
# GROUPTTL: The number of seconds the group will stay active after the first event occurs.
#
reevent_num_groupinfo=3
reevent_groupinfo_1_column=PROFILE
reevent_groupinfo_2_column=EVENTIDENTITIES
reevent_groupinfo_3_column=INSTANCES
#
# Event Information adds additional event information under the Show Details -> Event More Information portion of the UI
# reevent_num_eventinfo Contains the number of event information columns to display
# reevent_eventinfo_<num>_column Contains the column name from which the data is retrieved
# The following columns are allowed:
# PROFILE, INSTANCES, EVENTIDENTITY, EVENTIDENTITIES, CONFIGNAME, and GROUPNAME
# PROFILE: The relationship profile, or strength of the related event.
# INSTANCES: Total number of instances for the related event.
# EVENTIDENTITY: The unique event identity for the related event.
# EVENTIDENTITIES: A comma separated list that creates the event identity.
# CONFIGNAME: The configuration name the related event was created under.
# GROUPNAME: The group name the related event is created under.
#
reevent_num_eventinfo=1
reevent_eventinfo_1_column=INSTANCES
#
##################################################################################
# The following properties are used to configure event pattern creation ##
# type.resourcelist= ##
# type.servername.column= ##
# type.serverserial.column= ##
# type.default.eventid= ##
##################################################################################

type_number_of_type_configurations=1
type.0.eventid=SUMMARY
type.0.eventtype=ALERTGROUP
type.0.filterclause=( Severity >=3 )
type.0.osfilterclause=Severity >=3
#
##################################################################################
# The following properties are used to configure Name Similarity (NS) ##
##################################################################################

name_similarity_feature_enable=true
name_similarity_default_pattern_enable=false
name_similarity_default_threshold=0.9
name_similarity_default_lead_restriction=1
name_similarity_default_tail_restriction=0
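As an illustration of the rollup syntax documented above, the following hypothetical addition would show an example NODE value as an extra column in seasonal event reports. Remember to increment number_of_rollup_configuration and to re-import the file with NOI_DefaultValues_Configure:

number_of_rollup_configuration=3
rollup_3_column_name=NODE
rollup_3_type=EXAMPLE
rollup_3_display_name=ExampleNode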

Configuring Event Analytics using the command line

You can configure Event Analytics from the command line using the ./nci_trigger utility.

Before you begin
In Netcool/Impact v7.1.0.13 it is recommended to use the Event Analytics configuration wizard instead of the ./nci_trigger command to edit properties in the NOI Shared Configuration properties file. For more information, see "Configuring Event Analytics using the wizard" on page 264.

Configuring the historical event database

Configure access to the Tivoli Netcool/OMNIbus historical event database that contains the data used to analyze historical events for Event Analytics.

Before you begin
In Netcool/Impact v7.1.0.13 it is recommended to use the Event Analytics configuration wizard instead of the ./nci_trigger command to edit properties in the NOI Shared Configuration properties file. For more information, see "Configuring Event Analytics using the wizard" on page 264.

Configuring Db2 database connection within Netcool/Impact

You can configure a connection to a valid Db2 database from within IBM Tivoli Netcool/Impact.

Before you begin
In Netcool/Impact v7.1.0.13 and later releases, it is recommended to use the Event Analytics configuration wizard instead of the ./nci_trigger command to edit properties in the NOI Shared Configuration properties file. For more information, see "Configuring Event Analytics using the wizard" on page 264.

About this task
Users can run seasonality event reports and related event configurations, specifying the time range and name with Db2. Complete the following steps to configure the ObjectServer data source or data type.

Procedure
1. Log in to the Netcool/Impact UI.

https://impacthost:port/ibm/console

2. Configure the ObjectServer data source and data type.
   a) In the Netcool/Impact UI, from the list of available projects, select the NOI project.
   b) Select the Data Model tab and select ObjectServerForNOI.
      1) Click Edit and enter the connection information (host name, port, user name, and password).
      2) To save the Netcool/Impact data source, click Test Connection, followed by the Save icon.
   c) Edit the data type. Expand the data source and edit the data type to correspond to the ObjectServer event history database type. For example, AlertsForNOITable.
   d) For Base Table, select the base table.
   e) To update the schema and table, click Refresh and then click Save.
   f) Select the Data Model tab and select ObjectServerHistoryDb2ForNOI.
      1) Click Edit and enter the connection information (host name, port, user name, and password).
      2) To save the Netcool/Impact data source, click Test Connection, followed by the Save icon.
   g) Edit the data type. Expand the ObjectServerHistoryDb2ForNOI data source and edit AlertsHistoryDb2Table.
   h) For Base Table, select the database and table.
   i) To update the schema and table, click Refresh and then click Save.
   j) Select the Services tab and ensure that the following services are started:
      ProcessRelatedEvents
      ProcessSeasonalityEvents
      ProcessRelatedEventConfig
3. Configure the Db2 database connection within Netcool/Impact if it was previously configured for Oracle or MSSQL. The following steps configure the report generation to use the Db2 database. Export the default properties, change the default configuration, and update the properties.
   a) To generate a properties file, go to the $IMPACT_HOME/bin directory to locate the nci_trigger utility, and run the following command from the command-line interface:

nci_trigger <SERVER> <username>/<password> NOI_DefaultValues_Export FILENAME <directory>/<filename>

Where:
<SERVER>
    The server where Event Analytics is installed.
<username>
    The user name of the Event Analytics user.
<password>
    The password of the Event Analytics user.
<directory>
    The directory where the file is stored.
<filename>
    The name of the properties file.
For example:

./nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Export FILENAME /tmp/seasonality.props

b) Update the properties file. Some property values are overwritten by the generated properties file, and you might need to update other property values in the generated properties file. For a full list of affected properties, see "Generated properties file" on page 270.
   • If you do not have the following parameter values, update your properties file to reflect these parameter values:

history_datasource_name=ObjectServerHistoryDb2ForNOI
history_datatype_name=AlertsHistoryDb2Table
history_database_table=
history_database_type=Db2

c) To import the modified properties file into Netcool/Impact, enter the following command:

nci_trigger <SERVER> <username>/<password> NOI_DefaultValues_Configure FILENAME <directory>/<filename>

For example:

./nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Configure FILENAME /tmp/seasonality.props

Related tasks
Installing Netcool/OMNIbus and Netcool/Impact

Configuring Oracle database connection within Netcool/Impact

You can configure a connection to a valid Oracle database from within IBM Tivoli Netcool/Impact.

Before you begin
In Netcool/Impact v7.1.0.13 and later releases, it is recommended to use the Event Analytics configuration wizard instead of the ./nci_trigger command to edit properties in the NOI Shared Configuration properties file. For more information, see "Configuring Event Analytics using the wizard" on page 264.
To use Oracle as the archive database, you must set up a remote connection to Netcool/Impact. For more information, see remote connection.

About this task
Users can run seasonality event reports and related event configurations, specifying the time range and name with Oracle. Complete the following steps to configure the ObjectServer data source or data type.

Procedure
1. Log in to the Netcool/Impact UI.

https://impacthost:port/ibm/console

2. Configure the ObjectServer data source and data type.
   a) In the Netcool/Impact UI, from the list of available projects, select the NOI project.
   b) Select the Data Model tab, and select ObjectServerForNOI.
      1) Click Edit and enter the connection information (host name, port, user name, and password).
      2) Save the Netcool/Impact data source. Click Test Connection, followed by the Save icon.
   c) Edit the data type. Expand the data source ObjectServerForNOI and edit the data type to correspond to the ObjectServer event history database type. For example, AlertsForNOITable.
   d) For Base Table, select the base table.
   e) To update the schema and table, click Refresh and then click Save.
   f) Select the Data Model tab, and select ObjectServerHistoryOrclForNOI.
      1) Click Edit and enter the connection information (host name, port, user name, password, and SID or service name).
      2) Save the Netcool/Impact data source. Click Test Connection, followed by the Save icon.
   g) Edit the data type. Expand the data source ObjectServerHistoryOrclForNOI and edit the AlertsHistoryOrclTable data type.
   h) For Base Table, select the database and table.
   i) To update the schema and table, click Refresh and then click Save.
   j) Edit the data type. Expand the data source ObjectServerHistoryOrclForNOI and edit the SE_HISTORICALEVENTS_ORACLE data type.
   k) For Base Table, select the database and table.
   l) To update the schema and table, click Refresh and then click Save.
   m) Select the Services tab and ensure that the following services are started:
      ProcessRelatedEvents
      ProcessSeasonalityEvents
      ProcessRelatedEventConfig
3. Configure the report generation to use the Oracle database. Export the default properties, change the default configuration, and update the properties.
   a) Generate a properties file. Go to the $IMPACT_HOME/bin directory to locate the nci_trigger utility, and run the following command from the command-line interface:

nci_trigger <SERVER> <username>/<password> NOI_DefaultValues_Export FILENAME <directory>/<filename>

Where:
<SERVER>
    The server where Event Analytics is installed.
<username>
    The user name of the Event Analytics user.
<password>
    The password of the Event Analytics user.
<directory>
    The directory where the properties file is stored.
<filename>
    The name of the properties file.
For example:

./nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Export FILENAME /tmp/seasonality.props

b) You need to modify the property values that are overwritten by the generated properties file. For a full list of properties, see "Generated properties file" on page 270.
   • If you do not have the following values for these properties, update your properties file to reflect these property values:

history_datasource_name=ObjectServerHistoryOrclForNOI
history_datatype_name=AlertsHistoryOrclTable
history_database_table=
history_database_type=Oracle

   • Set the history_db_timestampformat property to the following value, which is the Oracle database timestamp format from the policy:

history_db_timestampformat=yyyy-mm-dd hh24:mi:ss

Note: The history_db_timestampformat property is delivered in the properties file with a default value of yyyy-MM-dd HH:mm:ss.SSS. This default timestamp format does not work with Oracle, so you need to perform the previous step to change the default value to the Oracle database timestamp format from the policy (yyyy-mm-dd hh24:mi:ss).
c) Import the modified properties file into Netcool/Impact using the following command:

nci_trigger server_name username/password NOI_DefaultValues_Configure FILENAME directory/filename

For example:

./nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Configure FILENAME /tmp/seasonality.props

Configuring MS SQL database connection within Netcool/Impact
You can configure a connection to a valid MS SQL database from within IBM Tivoli Netcool/Impact.

Before you begin
In Netcool/Impact v7.1.0.13 and later releases, it is recommended to use the Event Analytics configuration wizard instead of the ./nci_trigger command to edit properties in the NOI Shared Configuration properties file. For more information, see "Configuring Event Analytics using the wizard" on page 264. MS SQL support requires, at minimum, IBM Tivoli Netcool/Impact 7.1.0.1. To use MS SQL as the archive database, you must set up a remote connection to Netcool/Impact. For more information, see remote connection.

About this task
Users can run seasonality event reports and related event configurations, specifying the time range and name, with MS SQL as the historical event database. Complete the following steps to configure the ObjectServer data source and data type.

Procedure
1. Log in to the Netcool/Impact UI.

https://impacthost:port/ibm/console

2. Configure the ObjectServer data source and data type.
a) In the Netcool/Impact UI, from the list of available projects, select the NOI project.
b) Select the Data Model tab and select ObjectServerForNOI.
1) Click Edit and enter the connection information (such as the user name, password, host name, and port).
2) Save the Netcool/Impact data source. Click Test Connection, followed by the Save icon.
c) Edit the data type. Expand the data source and edit the data type that corresponds to the ObjectServer event history database type. For example, AlertsForNOITable.
d) For Base Table, select the base table.
e) To update the schema and table, click Refresh and then click Save.
f) Select the Data Model tab and select ObjectServerHistoryMSSQLForNOI.
1) Click Edit and enter the connection information (such as the user name, password, host name, port, and database).
2) Save the Netcool/Impact data source. Click Test Connection, followed by the Save icon.
g) Edit the data type. Expand the data source ObjectServerHistoryMSSQLForNOI and edit AlertsHistoryMSSQLTable.
h) For Base Table, select the base table.
i) To update the schema and table, click Refresh and then click Save.
j) Select the Services tab and ensure that the following services are started: ProcessRelatedEvents, ProcessSeasonalityEvents, and ProcessRelatedEventConfig.
3. Configure the report generation to use the MS SQL database.
a) Generate a properties file. Go to the $IMPACT_HOME/bin directory to locate the nci_trigger utility, and run the following command from the command-line interface:

nci_trigger server_name username/password NOI_DefaultValues_Export FILENAME directory/filename

Where:
• server_name is the name of the server where Event Analytics is installed.
• username is the user name of the Event Analytics user.
• password is the password of the Event Analytics user.
• directory is the directory where the properties file is stored.
• filename is the name of the properties file.
For example: ./nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Export FILENAME /tmp/seasonality.props
b) Update the properties file. Some property values are overwritten by the generated properties file, and you might need to update other property values in it. For a full list of affected properties, see "Generated properties file" on page 270.
• If your properties file does not already contain the following parameter values, update it to reflect them:

history_datasource_name=ObjectServerHistoryMSSQLForNOI
history_datatype_name=AlertsHistoryMSSQLTable
history_database_table=
history_database_type=MSSQL

c) Import the modified properties file into Netcool/Impact by entering the following command:

nci_trigger server_name username/password NOI_DefaultValues_Configure FILENAME directory/filename

For example,

./nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Configure FILENAME /tmp/seasonality.props

Adding columns to seasonal and related event reports
You can add columns to seasonal event reports and related events reports by updating the rollup configuration.

Before you begin
Make sure that the Historical Event database view that contains the data used to generate these reports contains the Acknowledged field. If it does not, you must create a Historical Event database view and add the Acknowledged field to that view; an illustrative sketch follows. In Netcool Operations Insight v1.4.1.2 and later (corresponding to Netcool/Impact v7.1.0.13 and later), it is recommended to use the Event Analytics Configuration Wizard instead of the ./nci_trigger command to edit properties in the NOI Shared Configuration properties file.
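If you need to create such a view, the following Db2 statement is a minimal sketch. It assumes the standard REPORTER_STATUS archive table; the view name REPORTER_STATUS_EA and the column list are illustrative, so adjust them to match your schema:

-- Illustrative only: create a view over the event archive that includes
-- the Acknowledged field required by the report rollups.
CREATE VIEW REPORTER_STATUS_EA AS
  SELECT IDENTIFIER, NODE, SUMMARY, ALERTGROUP, SEVERITY,
         FIRSTOCCURRENCE, LASTOCCURRENCE, ACKNOWLEDGED
  FROM REPORTER_STATUS;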

About this task
To update the rollup configuration, complete the following steps.

Procedure
1. Generate a properties file containing the latest Event Analytics system settings.
a) Navigate to the directory $IMPACT_HOME/bin.

b) Run the following command to generate a properties file containing the latest Event Analytics system settings.

nci_trigger server_name username/password NOI_DefaultValues_Export FILENAME directory/filename

Where:
• server_name is the name of the server where Event Analytics is installed.
• username is the user name of the Event Analytics user.
• password is the password of the Event Analytics user.
• NOI_DefaultValues_Export is a Netcool/Impact policy that performs an export of the current Event Analytics system settings to a designated properties file.
• directory is the directory where the properties file is stored.
• filename is the name of the properties file.
For example:

nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Export FILENAME /tmp/properties.props

2. Update the properties file that you generated.
a. Specify the number of columns that you want to add to the reports:
• For seasonal events, increase the value of the number_of_rollup_configuration parameter (default value 2).
• For related events, increase the value of the reevent_number_of_rollup_configuration parameter (default value 2).
For example, to add one column to the reports, increase the parameter value by one, from 2 to 3.
b. For a new rollup column, add property information.
• For a new Seasonal Event reports column, add the following properties.

rollup_<number>_column_name=<column_name>
rollup_<number>_display_name=<column_name>_<type>
rollup_<number>_type=<type>

• For a new Related Events reports column, add the following properties.

reevent_rollup_<number>_column_name=<column_name>
reevent_rollup_<number>_display_name=<column_name>_<type>
reevent_rollup_<number>_type=<type>
reevent_rollup_<number>_actionable=<true|false>

Where:
<number>
Specifies the new column rollup number.
<column_name>
Specifies the new column name. The column name must match the column name in the history table.
<display_name>
Specifies the new column display name. The display name must match the column name in the report.
<type>
Specifies one of the following types:
MAX
The maximum value observed for the column. If no value is observed, the value defaults to the minimum value of an integer.

MIN
The minimum value observed for the column. If no value is observed, the value defaults to the maximum value of an integer.
SUM
The sum of all of the values observed for the column.
NON_ZERO
A counting column that counts nonzero occurrences of events. This column can be useful to track the proportion of actioned events, or how many events had an associated ticket number.
DISTINCT
The number of distinct values that are seen for this key-value pair.
EXAMPLE
Displays the first non-blank example of a field that contained this key. The EXAMPLE type is useful when you are running seasonality on a field that can't be accessed, such as ALERT_IDENTIFIER, and you want an example human-readable SUMMARY to demonstrate the type of problem.
Note: You cannot change the property of a rollup column once the configuration has been updated. You must add a new rollup column and specify a different display name (with a new rollup number if you are keeping the old rollup).
<actionable>
If this property is set to true for a rollup, the rollup is used to determine the probable root cause of a correlation rule. This root cause determination is based on the rollup that has the most actions that are taken against it. For example, if Acknowledge is part of your rollup configuration and has a property value of actionable=true, then the event with the highest occurrence of Acknowledge is determined to be the probable root cause. Probable root cause determination uses the descending order of the actionable rollups, that is, the first actionable rollup is a higher priority than the second actionable rollup. Only four of the possible keywords are valid for root cause: MAX, MIN, SUM, NON_ZERO. If this property is set to false for a rollup, the rollup is not used to determine the probable root cause of a rule. If all rollup configurations have a property value of actionable=false, the first event that is found is identified as the parent. To manually change a root cause event for a correlation rule, see "Selecting a root cause event for a correlation rule" on page 448.
3. Import the modified properties file into Event Analytics.
a) Ensure that you are in the directory $IMPACT_HOME/bin.
b) Run the following command to perform an import of Event Analytics system settings from a designated properties file.

nci_trigger server_name username/password NOI_DefaultValues_Configure FILENAME directory/filename

Where:
• server_name is the name of the server where Event Analytics is installed.
• username is the user name of the Event Analytics user.
• password is the password of the Event Analytics user.
• NOI_DefaultValues_Configure is a Netcool/Impact policy that performs an import of Event Analytics system settings from a designated properties file.
• directory is the directory where the properties file is stored.
• filename is the name of the properties file.

For example:

nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Configure FILENAME /tmp/properties.props

Results
The rollup configuration is updated.

Example
Example 1. To add a third column to the Seasonal Event report, change the rollup configuration value to 3, and add the properties.

number_of_rollup_configuration=3
rollup_1_column_name=SEVERITY
rollup_1_display_name=SEVERITY_MIN
rollup_1_type=MIN
rollup_2_column_name=SEVERITY
rollup_2_display_name=SEVERITY_MAX
rollup_2_type=MAX
rollup_3_column_name=TYPE
rollup_3_display_name=TYPE_MAX
rollup_3_type=MAX

Example 2. The configuration parameters for a default Related Events report.

reevent_rollup_1_column_name=ORIGINALSEVERITY
reevent_rollup_1_display_name=ORIGINALSEVERITY_MAX
reevent_rollup_1_type=MAX
reevent_rollup_1_actionable=true
reevent_rollup_2_column_name=ACKNOWLEDGED
reevent_rollup_2_display_name=ACKNOWLEDGED_NON_ZERO
reevent_rollup_2_type=NON_ZERO
reevent_rollup_2_actionable=true
reevent_rollup_3_column_name=ALERTGROUP
reevent_rollup_3_display_name=ALERTGROUP_EXAMPLE
reevent_rollup_3_type=EXAMPLE
reevent_rollup_3_actionable=false

What to do next
To add columns to the Seasonal Event reports, Historical Event portlet, Related Event reports, or Related Event Details portlet, complete the following steps:
1. Log in to the Tivoli Netcool/Impact UI.
2. Go to the Policies tab.
3. Open the policy that you want to modify. You can modify one policy at a time.
• For Historical Events, open the SE_GETHISTORICALEVENTS policy.
• For Seasonal Events, open the SE_GETEVENTDATA policy.
• For related events groups, open one of the following policies:
RE_GETGROUPS_ACTIVE
RE_GETGROUPS_ARCHIVED
RE_GETGROUPS_EXPIRED
RE_GETGROUPS_NEW
RE_GETGROUPS_WATCHED
Note: Each policy is individually updated. To update two or more policies, you must modify each policy individually.
• For related events, open one of the following policies:
RE_GETGROUPEVENTS_ACTIVE
RE_GETGROUPEVENTS_ARCHIVED
RE_GETGROUPEVENTS_EXPIRED
RE_GETGROUPEVENTS_NEW
RE_GETGROUPEVENTS_WATCHED
Note: Each policy is individually updated. To update two or more policies, you must modify each policy individually.
• For the related events details group instances table, open the following policy: RE_GETGROUPINSTANCEV1
4. Click the Configure Policy Settings icon.
5. Under Policy Output Parameters, click Edit.
6. To create a custom schema definition, click the Schema Definition Editor icon.
7. To create a new field, click New.
8. Specify the new field name and format. The new field name must match the display name in the configuration file. The format must match the format in the AlertsHistory table and must be appropriate for the rollup type added. For example, for numerical types such as SUM or NON_ZERO, use a numeric format. Use String for DISTINCT if the base column is String. Refresh the SE_GETHISTORICALEVENTS_Db2 table, or other database model, before you run Event Analytics with the added Historical Event table fields. For the RE_GETGROUPS_ policies, only rollup columns with a value of MAX, MIN, SUM, or NON_ZERO are supported; therefore, add only numeric fields to the schema.
9. To complete the procedure, click OK on each of the open dialog boxes, and Save on the Policies tab.
Note: Columns that are created for the Related Event Details before Netcool Operations Insight release 1.4.0.1 are displayed as designed. Configurations and groups that are created after you upgrade to Netcool Operations Insight release 1.4.0.1 display the events from the historical event database. By adding columns to the Related Event Details, you can display additional information such as the owner ID or ticket number. You can also add the following columns for group instances in the Related Event Details:
• SERVERSERIAL
• SERVERNAME
• TALLY
• OWNERUID
By default, the previously listed columns are hidden for group instances in the Related Event Details. To display these columns in the Related Event Details, you need to edit the Policy_RE_GETGROUPINSTANCEV1_RE_GETGROUPINSTANCEV1.properties file, which is located in the following directory: $IMPACT_HOME/uiproviderconfig/properties. Specifically, change the following properties in the Policy_RE_GETGROUPINSTANCEV1_RE_GETGROUPINSTANCEV1.properties file from their default values of true to false (or comment out the field or fields):

SERVERSERIAL.hidden=true
SERVERNAME.hidden=true
TALLY.hidden=true
OWNERUID.hidden=true

For example,

OWNERUID.hidden=false

Or, for example,

#OWNERUID.hidden=true

Configuring event suppression
Event suppression is available as an action when creating a seasonal event rule. You can configure event suppression by modifying the NOI_DefaultValues properties file.

Before you begin
In Netcool Operations Insight v1.4.1.2 and later (corresponding to Netcool/Impact v7.1.0.13 and later), it is recommended to use the Event Analytics Configuration Wizard instead of the ./nci_trigger command to edit properties in the NOI Shared Configuration properties file. For more information on the relevant section of the wizard, see "Configuring event suppression" on page 267.

About this task
To add details about suppressing and unsuppressing events, you must modify the NOI_DefaultValues properties file in the $IMPACT_HOME/bin directory.

Procedure
1. Log in to the server where IBM Tivoli Netcool/Impact is installed and running.
2. Generate a properties file containing the latest Event Analytics system settings.
a) Navigate to the directory $IMPACT_HOME/bin.
b) Run the following command to generate a properties file containing the latest Event Analytics system settings.

nci_trigger server_name username/password NOI_DefaultValues_Export FILENAME directory/filename

Where:
• server_name is the name of the server where Event Analytics is installed.
• username is the user name of the Event Analytics user.
• password is the password of the Event Analytics user.
• NOI_DefaultValues_Export is a Netcool/Impact policy that performs an export of the current Event Analytics system settings to a designated properties file.
• directory is the directory where the properties file is stored.
• filename is the name of the properties file.
For example:

nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Export FILENAME /tmp/properties.props

3. Add the following lines of text to the properties file:

seasonality.suppressevent.column.name=SuppressEscl
seasonality.suppressevent.column.type=NUMERIC
seasonality.suppressevent.column.value=4
seasonality.unsuppressevent.column.name=SuppressEscl
seasonality.unsuppressevent.column.type=NUMERIC
seasonality.unsuppressevent.column.value=0

4. Import the modified properties file into Event Analytics.
a) Ensure that you are in the directory $IMPACT_HOME/bin.
b) Run the following command to perform an import of Event Analytics system settings from a designated properties file.

nci_trigger server_name username/password NOI_DefaultValues_Configure FILENAME directory/filename

Where:

• server_name is the name of the server where Event Analytics is installed.
• username is the user name of the Event Analytics user.
• password is the password of the Event Analytics user.
• NOI_DefaultValues_Configure is a Netcool/Impact policy that performs an import of Event Analytics system settings from a designated properties file.
• directory is the directory where the properties file is stored.
• filename is the name of the properties file.
For example:

nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Configure FILENAME /tmp/properties.props
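After the import, any seasonal event rule that suppresses an event sets the SuppressEscl field in alerts.status to 4, and unsuppression resets it to 0, per the properties above. As a quick, illustrative check from the ObjectServer side (assuming the standard alerts.status SuppressEscl field), you can list the currently suppressed events with nco_sql:

-- Illustrative check: list events that the rules have suppressed
select Identifier, Summary, SuppressEscl from alerts.status where SuppressEscl = 4;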

Configuring event pattern processing
You can configure how patterns are derived from related events by editing properties in the generated NOI Shared Configuration properties file.

Before you begin
In Netcool Operations Insight v1.4.1.2 and later (corresponding to Netcool/Impact v7.1.0.13 and later), it is recommended to use the Event Analytics Configuration Wizard instead of the ./nci_trigger command to edit properties in the NOI Shared Configuration properties file. For more information on the relevant section of the wizard, see "Configuring event pattern processing" on page 267.
Note:
• You should perform this configuration task prior to running any related events configurations that use the global type properties associated with event pattern creation. It is expected that you will perform this configuration task only when something in your environment changes that affects where type information is found in events.
• Avoid configuring multiple types for the same event. By default, Identifier is used to identify the same events. This can be overridden, but assuming the default, you should set up the type properties so that events identified by the same Identifier have only one type value. For example, if there are 10 events with Identifier=xxx and you want to use a type=ALERTGROUP, then the events should have the same ALERTGROUP. If events for the same Identifier have many alert group values, the first one will be picked.
The default NOI Shared Configuration properties file is divided into sections, where each section contains a number of properties that allow you to instruct how Netcool/Impact handles a variety of operations, such as how it should handle event pattern creation. There are three categories of event pattern creation properties defined in the NOI Shared Configuration properties file:
• Properties related to configuring which table columns in the Historical Event Database Netcool/Impact should use in performing the event pattern analysis.
• Properties related to configuring the default unique event identifier and event type in the Historical Event Database that you want Netcool/Impact to use when there is no match in the event type index related properties.
• Properties related to configuring one or more event identity and event type indexes.
Table 45 on page 284 describes the event pattern creation properties defined in the NOI Shared Configuration properties file. Use these descriptions to help you configure the values appropriate for your environment.

Table 45. Event pattern creation properties

Properties related to configuring table columns in the Historical Event Database

type.resourcelist
Description: Specifies the name of the table column or columns in the Historical Event Database that Netcool/Impact should use in performing the event pattern analysis.
Example: The NOI Shared Configuration properties file that you generate with the nci_trigger command provides the following default value:

type.resourcelist=NODE

Note: You should use the default value, NODE.

type.servername.column
Description: Specifies the name of the table column in the Historical Event Database that contains the name of the server associated with any particular event that arrives in the Historical Event Database.
Example: The NOI Shared Configuration properties file that you generate with the nci_trigger command provides the following default value:

type.servername.column=SERVERNAME

Note: You should use the default value, SERVERNAME, where possible.

type.serverserial.column
Description: Specifies the name of the table column in the Historical Event Database that contains the server serial number associated with any particular event that arrives in the Historical Event Database. Note that the server serial number should be unique.
Example: The NOI Shared Configuration properties file that you generate with the nci_trigger command provides the following default value:

type.serverserial.column=SERVERSERIAL

Note: You should use the default value, SERVERSERIAL, where possible.

Properties related to configuring the default unique event identifier and event type in the Historical Event Database

type.default.eventid
Description: Contains the database field in the Historical Event Database that you want to specify as the default Event Identity. An Event Identity is a database field that identifies a unique event in the Historical Event Database. When you configure a related events configuration, you select database fields for the Event Identity from a drop-down list of available fields. In the user interface, you perform this from the Advanced tab when you want to override the settings in the configuration file. Netcool/Impact uses the database field specified in this property as the default Event Identity when there is no match in the value specified in the type.index.eventid property.
Note: The database field specified for this property should not contain a timestamp component.
Example: The NOI Shared Configuration properties file that you generate with the nci_trigger command provides the following default value:

type.default.eventid=IDENTIFIER

type.default.eventtype
Description: Specifies the default related events type to use when creating an event pattern to generalize. Netcool/Impact uses this default related events type when there is no match in the type.index.eventtype property.
Note: You choose the related events type values based on the fields for which you want to create a generalized pattern. For example, if you want to create a pattern and generalize it based on the EVENTID for an event, you would specify that value in this property. When the related events configuration completes and you create a pattern for generalization, the pattern generalization screen contains a drop-down menu that lists all of the EVENTIDs found in the Historical Event Database. You can then create a pattern/rule that is applied to all EVENTIDs selected for that pattern. This means that you can expand the definition of the pattern to include all types, not just the types in the Related Events Group.
Example: The NOI Shared Configuration properties file that you generate with the nci_trigger command provides the following default value:

type.default.eventtype=EVENTID

Properties related to configuring one or more event identity and event type indexes
You should specify values for each of the properties described in this section.
Note: You can delete any of the additional event types by removing the relevant lines from this file. If the type is already being used in one or more analytics configurations, then deleting the type will remove it from those configurations, and the default event type will be used. To ensure that your analytics results are valid, you should rerun the affected analytics configurations.

type_number_of_type_configurations
Description: Specifies the number of types to use in the NOI Shared Configuration properties file for the global type configuration. There is no limit on how many types you can configure.
Example: The following example specifies two types for the global type configuration:

type_number_of_type_configurations=2

Thus, you would define the other type.index related properties as follows. Note that the index numbering starts with 0 (zero).

type.0.eventid=Identifier
type.0.eventtype=ACMEType
type.0.filterclause=Vendor='ACME'
type.0.osfilterclause=Vendor='ACME'
type.0.typename=Vendor = Type0
type.1.eventid=SUMMARY,NODE
type.1.eventtype=TAURUSType
type.1.filterclause=Vendor = 'TAURUS'
type.1.osfilterclause=Vendor = 'TAURUS'
type.1.typename=Vendor = Type1

type.index.eventid
Description: Specifies the database field in the Historical Event Database that you want to specify as the Event Identity. Multiple fields are separated by commas.
Example: The following shows an example of a database field used as the Event Identity:

type.0.eventid=SUMMARY

The following shows an example of multiple database fields used as the Event Identity:

type.0.eventid=NODE,SUMMARY,ALERTGROUP

type.index.eventtype
Description: Specifies the event type to return for pattern generalization.
Note: The returned event types display in the event type drop-down menu in the pattern generalization screen.
Example:

type.0.eventtype=EVENTID

type.index.filterclause
Description: Specifies a Historical Event Database filter that defines a set of events. For the set of events defined by this filter, the event type will be found in the table column or columns in the type.index.eventtype property.
Note: It is recommended that you create one or more database indexes on the reporter status table for the fields used in the type.index.filterclause to speed up the query.
Example:

type.0.filterclause=Vendor = 'ACME'

type.index.osfilterclause
Description: Specifies an ObjectServer filter to filter matching event types.
Note: The filter that you specify for the type.index.osfilterclause property should be semantically identical to the filter that you specify for the type.index.filterclause property, except that for this property you use the ObjectServer syntax.
Example:

type.0.osfilterclause=Vendor = 'ACME'

type.index.typename
Description: Specifies a user-defined name for this event type. The name should be easily understandable because it is used later to identify this event type when associating specific event types with an event analytics configuration.
Note: You can rename any of the additional event types by modifying the relevant type.index.typename value. If the type is already being used in one or more analytics configurations, then renaming the type will remove it from those configurations. You must then manually add the newly named type to each of the affected analytics configurations. To ensure that your analytics results are fully synchronized, you must rerun the affected analytics configurations.
Example:

type.0.typename=Type0

About this task
To configure the event pattern creation properties that Netcool/Impact uses for generalization, you must modify the default NOI Shared Configuration properties file in the $IMPACT_HOME/bin directory.

Procedure
1. Log in to the server where IBM Tivoli Netcool/Impact is installed and running.
2. Generate a properties file containing the latest Event Analytics system settings.
a) Navigate to the directory $IMPACT_HOME/bin.
b) Run the following command to generate a properties file containing the latest Event Analytics system settings.

nci_trigger server_name username/password NOI_DefaultValues_Export FILENAME directory/filename

Where:
• server_name is the name of the server where Event Analytics is installed.
• username is the user name of the Event Analytics user.
• password is the password of the Event Analytics user.
• NOI_DefaultValues_Export is a Netcool/Impact policy that performs an export of the current Event Analytics system settings to a designated properties file.
• directory is the directory where the properties file is stored.
• filename is the name of the properties file.
For example:

nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Export FILENAME /tmp/properties.props

3. Go to the directory where you generated the NOI Shared Configuration properties file.
4. Create a backup copy of the generated NOI Shared Configuration properties file.
5. Using the editor of your choice, open the generated NOI Shared Configuration properties file for editing.
6. Using the information about the event pattern creation properties described in Table 45 on page 284, specify values appropriate to your environment. Remember that the following properties have default values that you should not change:
• type.resourcelist
• type.servername.column
• type.serverserial.column
7. After specifying appropriate values for the event pattern creation properties, save and close the NOI Shared Configuration properties file.
8. Import the modified properties file into Event Analytics.
a) Ensure that you are in the directory $IMPACT_HOME/bin.
b) Run the following command to perform an import of Event Analytics system settings from a designated properties file.

nci_trigger server_name username/password NOI_DefaultValues_Configure FILENAME directory/filename

Where:
• server_name is the name of the server where Event Analytics is installed.
• username is the user name of the Event Analytics user.
• password is the password of the Event Analytics user.
• NOI_DefaultValues_Configure is a Netcool/Impact policy that performs an import of Event Analytics system settings from a designated properties file.
• directory is the directory where the properties file is stored.
• filename is the name of the properties file.
For example:

nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Configure FILENAME /tmp/properties.props

Example
The following example sets the Event Identity, defines a set of events, and finds the type information in the specified table column or columns in the Historical Event Database:

type_number_of_type_configurations=1
type.0.eventid=NODE,SUMMARY,ALERTGROUP
type.0.eventtype=ACMEType
type.0.filterclause=( Vendor = 'ACME' )
type.0.osfilterclause=Vendor = 'ACME'

More specifically, the example shows that if there is an event and the value of Vendor for that event is ACME, then Netcool/Impact looks in the table column called ACMEType to find the event type. The following example expands on the previous example by showing two configurations (as indicated by the value 2 in the type_number_of_type_configurations property):

type_number_of_type_configurations=2
type.0.eventid=NODE
type.0.eventtype=ACMEType
type.0.filterclause=( Vendor = 'ACME' )
type.0.osfilterclause=Vendor = 'ACME'
type.0.typename=Vendor = Type0
type.1.eventid=NODE,SUMMARY,ALERTGROUP
type.1.eventtype=TAURUSType
type.1.filterclause=( Vendor = 'TAURUS' )
type.1.osfilterclause=Vendor = 'TAURUS'
type.1.typename=Vendor = Type1

Note: Netcool/Impact attempts to match each event to the filter defined in configuration 0 first. If the event matches the filter defined in configuration 0, then Netcool/Impact defines the event's type as defined in that filter. If the event does not match the filter defined in configuration 0, Netcool/Impact attempts to match the event to the filter defined in configuration 1, and so on, for as many configuration types as you define. If an event matches none of the filters in the configuration types that you define, Netcool/Impact uses the default configuration to determine where the type and identity are to be found.

Other configuration tasks
Perform these configuration tasks to further configure your system.

Configuring the ObjectServer
Before you deploy rules based on related events or patterns, you must run SQL to update the ObjectServer. This SQL introduces relevant triggers into the ObjectServer to enable the rules to be fully functional.

About this task
The SQL provides commands for creating and modifying ObjectServer objects and data. Complete the following steps to run the SQL to update the ObjectServer.

Procedure
1. Copy the SQL file $IMPACT_HOME/add-ons/RelatedEvents/db/relatedevents_objectserver.sql from Netcool/Impact into the tmp directory on your ObjectServer host.
2. Run the SQL against your ObjectServer by entering the following command.
On Windows:

%OMNIHOME%\..\bin\redist\isql -U <username> -P <password> -S <servername> < C:\tmp\relatedevents_objectserver.sql

On Linux and UNIX:

$OMNIHOME/bin/nco_sql -user <username> -password <password> -server <servername> < /tmp/relatedevents_objectserver.sql

3. If you have not previously configured the Event Analytics ObjectServer, you must enter the following command.
On Windows:

%OMNIHOME%\..\bin\redist\isql -U <username> -P <password> -S <servername> < C:\tmp\relatedevents_objectserver.sql

On Linux and UNIX:

$OMNIHOME/bin/nco_sql -user <username> -password <password> -server <servername> < /tmp/relatedevents_objectserver.sql

4. All users must run the following SQL update against the ObjectServer.
On Windows:

%OMNIHOME%\..\bin\redist\isql -U <username> -P <password> -S <servername> < C:\tmp\relatedevents_objectserver_update_fp5.sql

On Linux and UNIX:

$OMNIHOME/bin/nco_sql -user <username> -password <password> -server <servername> < /tmp/relatedevents_objectserver_update_fp5.sql
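For example, on Linux, with illustrative connection values (the default root ObjectServer user with an empty password, and an ObjectServer named NCOMS; substitute your own credentials and server name):

$OMNIHOME/bin/nco_sql -user root -password '' -server NCOMS < /tmp/relatedevents_objectserver.sql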

What to do next
Event correlation for the related events function in Event Analytics uses a ParentIdentifier column that is added to the ObjectServer. If the size of this identifier field changes in your installation, you must change the value of the ParentIdentifier column within the ObjectServer SQL file that creates the event grouping automation, relatedevents_objectserver.sql, to ensure that both values are the same. The updated SQL is automatically picked up.
Related tasks
Installing Netcool/OMNIbus and Netcool/Impact

Additional configuration for Netcool/Impact server failover for Event Analytics
The following additional configuration is required if a seasonality report is running during a Netcool/Impact node failover. Without this configuration, the seasonality report might hang in the processing state following the failover. This gives the impression that the report is running, but it remains stuck at the phase and percentage complete level that is displayed following the failover. Any queued reports also do not run. This is due to a limitation of the Derby database. Use the workaround in this section to avoid this problem.

Procedure
1. Locate the jvm.options file in $IMPACT_HOME/wlp/usr/servers/<server_name>/.
2. Uncomment the following line in all the nodes of the failover cluster:

#-Xgc:classUnloadingKickoffThreshold=100
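After you remove the leading # comment character, the line reads as follows:

-Xgc:classUnloadingKickoffThreshold=100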

3. Restart all nodes in the failover Netcool/Impact cluster.

Results
Following these changes, any currently running seasonality report will terminate correctly during the cluster failover, and any queued reports will continue running after the failover has completed.

Configuring extra failover capabilities in the ObjectServer
Related events use standard ObjectServer components to provide a high availability solution. These ObjectServer components require extra configuration to ensure high availability where there is an ObjectServer pair and the primary ObjectServer goes down before the cache on the Netcool/Impact node refreshes. In this scenario, if you deploy a correlation rule, the rule is picked up if you have replication set up between the ObjectServer tables. Otherwise, the new rule is not picked up, and this state continues until you deploy another new rule. Complete the following steps to set up replication between the ObjectServer tables.
• In the .GATE.map file, add the following lines.

CREATE MAPPING RE_CACHEMAP
(
'name' = '@name' ON INSERT ONLY,
'updates' = '@updates'
);

• If your configuration does not use the standard StatusMap file, add the following line to the StatusMap file that you use to control alerts.status. You can find the name of the StatusMap file in the .tblrep.def file.

'ParentIdentifier' = '@ParentIdentifier'

• In the .tblrep.def file, add the following lines.

REPLICATE ALL FROM TABLE 'relatedevents.cacheupdates' USING map 'RE_CACHEMAP';

For more information about adding collection ObjectServers and display ObjectServers to your environment, see the following topics in IBM Knowledge Center for IBM Tivoli Netcool/OMNIbus (Netcool/OMNIbus v8.1.0 Welcome page):
• Setting up the standard multitiered environment
• Configuring the bidirectional aggregation ObjectServer Gateway
• Configuring the unidirectional primary collection ObjectServer Gateway

Configuring extra failover capabilities in the Netcool/Impact environment
Configure extra failover capabilities in the Netcool/Impact environment by adding a cluster to the Netcool/Impact environment. To do this, you must update the data sources in IBM Tivoli Netcool/Impact.

About this task
Update the following data sources when you add a cluster to the Netcool/Impact environment:
• seasonalReportDataSource
• RelatedEventsDatasource
• NOIReportDatasource
Complete the following steps to update the data sources.

Procedure
1. In Netcool/Impact, go to the Database Failure Policy.
2. Select Fail over or Fail back, depending on the high availability type that you want. For more information, see the failover and failback descriptions.
3. Go to Backup Source.
4. Enter the secondary Impact Server's Derby Host Name, Port, and Database information.

Standard failover
Standard failover is a configuration in which an SQL database DSA switches to a secondary database server when the primary server becomes unavailable, and then continues by using the secondary until Netcool/Impact is restarted.
Failback
Failback is a configuration in which an SQL database DSA switches to a secondary database server when the primary server becomes unavailable, and then tries to reconnect to the primary at intervals to determine whether it has returned to availability.

What to do next
If you encounter error ATKRST132E, see details in "Troubleshooting Event Analytics (on premises)" on page 671.
If you want your Netcool/Impact cluster that is processing events to contain the same cache, and to update the cache at or around the same time, you must run the file relatedevents_objectserver.sql with nco_sql. The relatedevents_objectserver.sql file contains the following commands.

create database relatedevents;
create table relatedevents.cacheupdates persistent (name varchar(20) primary key, updates integer);
insert into relatedevents.cacheupdates (name, updates) values ('RE_CACHE', 0);
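As an illustrative check after you run the file, you can confirm with nco_sql that the cache table exists and is seeded:

-- Illustrative check: the result should show the RE_CACHE row with 0 updates
select name, updates from relatedevents.cacheupdates;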

Mapping customized field names
Any customized field names in the Historical Event Database must be mapped to the corresponding standard field names in Netcool/Impact. You do this by creating a database view that maps to the standard field names.

Before you begin
You must perform this task if you have customized table columns in your Historical Event Database. For example, if you have defined columns called SUMMARYTXT and IDENTIFIERID instead of the default names SUMMARY and IDENTIFIER, you must perform this task. You create a database view that maps back to the actual field names.

About this task
The steps documented here are for a Db2 database. The procedure is similar for an Oracle database. To map customized columns in your Historical Event Database, complete the following steps.

Procedure
1. Use the following statement to create the view and point the data types to the new view.

DROP VIEW REPORTER_STATUS_STD;
CREATE VIEW REPORTER_STATUS_STD AS
SELECT SUMMARYTXT AS SUMMARY, IDENTIFIERID AS IDENTIFIER, REPORTER_STATUS.*
FROM REPORTER_STATUS;
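As a quick, illustrative check, a Db2 query such as the following should return the customized columns under their standard names:

-- Illustrative only: verify that the view exposes the standard field names
SELECT SUMMARY, IDENTIFIER FROM REPORTER_STATUS_STD FETCH FIRST 5 ROWS ONLY;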

2. Change the data types from REPORTER_STATUS to REPORTER_STATUS_STD. The data types for Db2 are AlertsHistoryDb2Table and SE_HISTORICALEVENTS_Db2 under the ObjectServerHistoryDb2ForNOI data source.
3. Delete the RELATEDEVENTS.RE_MAPPINGS records from the table:

DELETE FROM RELATEDEVENTS.RE_MAPPINGS WHERE TRUE;

4. Run the Event Analytics Configuration wizard to configure the Netcool/Impact properties to use for Event Analytics.
5. On the Historical event database configuration screen, connect to the database and then select REPORTER_STATUS_STD (view) from the History table drop-down menu as the Table name for Event Analytics.

6. When using any other columns that were mapped in the view, for example Summary for SUMMARYTXT, use the new value in any of the wizard screens; in this case, use Summary. For example, when adding fields to the report in the Configure report fields screen, use the values mapped in the view (Identifier or Summary).
7. Save the Event Analytics configuration. You can now use the mapped fields for Event Analytics.

Customizing tables in the Event Analytics UI
You can use the uiproviderconfig files to customize the tables in the Event Analytics UI. To customize how tables are displayed in the Event Analytics UI, you can update the properties and customization files that are specific to the policy or data type that you want to update. These files are stored in the properties and customization directories in the $IMPACT_HOME/uiproviderconfig/ directory. If you want to update the properties and customization files, make a backup of the files before you do any updates.

Configuring columns to display in the More Information panel
You can configure the columns that you want to display in the More Information panel.

About this task
The More Information panel can be started from within the Related Event Details portlet when you click the hyperlink for either the Group Name or the Pivot Event; the panel provides more details about the Group Name or the Pivot Event. The Event Analytics installation installs a default configuration of columns that display in the More Information panel, but you can change this configuration. Complete the following steps to configure the columns to display in the More Information panel.

Procedure
1. Generate a properties file containing the latest Event Analytics system settings.
a) Navigate to the directory $IMPACT_HOME/bin.
b) Run the following command to generate a properties file containing the latest Event Analytics system settings.

nci_trigger server_name username/password NOI_DefaultValues_Export FILENAME directory/filename

Where:
• server_name is the name of the server where Event Analytics is installed.
• username is the user name of the Event Analytics user.
• password is the password of the Event Analytics user.
• NOI_DefaultValues_Export is a Netcool/Impact policy that performs an export of the current Event Analytics system settings to a designated properties file.
• directory is the directory where the properties file is stored.
• filename is the name of the properties file.
For example:

nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Export FILENAME /tmp/properties.props

2. Update the properties file with properties for the columns that you want to display in the More Information panel.
• For columns related to the Group Name in the More Information panel, the following properties are the default properties in the properties file. You can add, remove, and change the default properties.

reevent_num_groupinfo=3
reevent_groupinfo_1_column=PROFILE
reevent_groupinfo_2_column=EVENTIDENTITIES
reevent_groupinfo_3_column=INSTANCES

reevent_num_groupinfo=3
This property represents the number of group information columns to display. The default value is 3 columns. The value can be any number between 1 and 8, as eight columns are allowed.
reevent_groupinfo_1_column=PROFILE
Enter this property line item for each column. The variables in this property line item are 1 and PROFILE. 1 denotes that this column is your first column. This value can increment up to 8 per property line item, as eight columns are allowed. PROFILE represents the column. The following eight columns are allowed:
PROFILE
Specifies the relationship profile, or strength, of the group.
EVENTIDENTITIES
Specifies a comma-separated list that creates the event identity.
INSTANCES
Specifies the total number of group instances.
CONFIGNAME
Specifies the configuration name under which the group was created.
TOTALEVENTS
Specifies the total number of events within the group.
UNIQUEEVENTS
Specifies the total number of unique events within the group.
REVIEWED
Specifies the review status of a group by a user.
GROUPTTL
Specifies the number of seconds the group will stay active after the first event occurs.
• For columns related to the Pivot Event in the More Information panel, the following properties are the default properties in the properties file. You can add, remove, and change the default properties.

reevent_num_eventinfo=1
reevent_eventinfo_1_column=INSTANCES

reevent_num_eventinfo=1
This property represents the number of event information columns to display. The default value is 1 column. The value can be any number between 1 and 6, as six columns are allowed.
reevent_eventinfo_1_column=INSTANCES
Enter this property line item for each column. The variables in this property line item are 1 and INSTANCES. 1 denotes that this column is your first column. This value can increment up to 6 per property line item, as six columns are allowed. INSTANCES represents the column. The following six columns are allowed:
INSTANCES
Specifies the total number of instances for the related event.
PROFILE
Specifies the relationship profile, or strength, of the related event.
EVENTIDENTITY
Specifies the unique event identity for the related event.
EVENTIDENTITIES
Specifies a comma-separated list that creates the event identity.

CONFIGNAME
Specifies the configuration name under which the related event was created.
GROUPNAME
Specifies the group name under which the related event was created.
3. Import the modified properties file into Event Analytics.
a) Ensure that you are in the directory $IMPACT_HOME/bin.
b) Run the following command to perform an import of Event Analytics system settings from a designated properties file.

nci_trigger server_name username/password NOI_DefaultValues_Configure FILENAME directory/filename

Where:
• server_name is the name of the server where Event Analytics is installed.
• username is the user name of the Event Analytics user.
• password is the password of the Event Analytics user.
• NOI_DefaultValues_Configure is a Netcool/Impact policy that performs an import of Event Analytics system settings from a designated properties file.
• directory is the directory where the properties file is stored.
• filename is the name of the properties file.
For example:

nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Configure FILENAME /tmp/properties.props
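For instance, to add a fourth Group Name column that shows the configuration name, the properties file could contain entries like the following (illustrative; CONFIGNAME is one of the eight allowed Group Name columns listed earlier):

# Illustrative: a fourth Group Name column showing the configuration name
reevent_num_groupinfo=4
reevent_groupinfo_1_column=PROFILE
reevent_groupinfo_2_column=EVENTIDENTITIES
reevent_groupinfo_3_column=INSTANCES
reevent_groupinfo_4_column=CONFIGNAME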

Configuring name similarity
Configure the name similarity feature by tuning parameters that govern how the system processes multiple resources when performing pattern matching.

About this task
Pattern matching enables Event Analytics to identify types of events that tend to occur together on a specific network resource. The name similarity feature extends pattern matching by enabling it to identify types of events that tend to occur together on more than one resource, where the resources within the pattern have a similar name. For examples of similar resource names that might be discovered by the name similarity feature, see "Examples of name similarity" on page 457. Depending on how name similarity is configured, pattern matching sees these resource names as similar and creates a single pattern including events from all of these resource names.
Similarity threshold value: Algorithms are used to determine name similarity. First, an edit distance is calculated by a third-party algorithm. The edit distance is the minimum number of operations needed to transform one string into the other, where an operation is defined as an insertion, deletion, or substitution of a single character, or a transposition of two adjacent characters. Then, the algorithm calculates a normalized similarity distance, which lies in the range 0.0 to 1.0. In this range, 0.0 means that the strings are identical and 1.0 means that the strings are completely different. The normalized similarity distance is calculated by using a contribution of the edit distance weighted according to the first string length, the second string length, and the number of transpositions. Finally, the name similarity algorithm calculates a normalized threshold value (in the range 0.0 to 1.0) by subtracting the normalized similarity distance from 1.0. A threshold value of 0.0 means that strings can be completely different. A threshold value of 1.0 means that strings must match exactly.
By default, name similarity is configured with values that should enable it to work effectively in most environments. Use this procedure to change these settings; a configuration example follows this task description.
Note: Only change name similarity settings if you understand the underlying algorithm.
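For example, the following properties excerpt shows one possible tuning (the values are illustrative, not recommendations): compared names must share their first 3 and last 4 characters, and must meet an overall similarity threshold of 0.8 instead of the default 0.9:

# Illustrative tuning only: first 3 and last 4 characters must match,
# with an overall similarity threshold of 0.8
name_similarity_feature_enable=true
name_similarity_default_threshold=0.8
name_similarity_default_lead_restriction=3
name_similarity_default_tail_restriction=4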

Chapter 5. Configuring 297 Procedure 1. Generate a properties file containing the latest Event Analytics system settings. a) Navigate to the directory $IMPACT_HOME/bin. b) Run the following command to generate a properties file containing the latest Event Analytics system settings.

nci_trigger server_name username/password NOI_DefaultValues_Export FILENAME directory/filename

Where:
• server_name is the name of the server where Event Analytics is installed.
• username is the user name of the Event Analytics user.
• password is the password of the Event Analytics user.
• NOI_DefaultValues_Export is a Netcool/Impact policy that performs an export of the current Event Analytics system settings to a designated properties file.
• directory is the directory where the properties file is stored.
• filename is the name of the properties file.
For example:

nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Export FILENAME /tmp/properties.props

2. Edit the properties file that you generated in the previous step. For example:

vi /tmp/properties.props

3. Find the section of the properties file that reads as follows. This code snippet shows the default values of the name similarity parameters:

##################################################################################
## The following properties are used to configure Name Similarity NS            ##
##################################################################################

name_similarity_feature_enable=true
name_similarity_default_pattern_enable=false
name_similarity_default_threshold=0.9
name_similarity_default_lead_restriction=1
name_similarity_default_tail_restriction=0

4. Update one or more of the name similarity settings. The following table describes each of these settings.

Table 46. Name similarity settings

name_similarity_feature_enable
Description: Boolean that switches the name similarity feature on or off.
Note: This is a global flag that governs all name similarity functionality. For example, if you set this flag to false, then no aspect of name similarity will be enabled, and none of the other flags in this table will have any effect.
Possible values: true (name similarity is switched on), false (name similarity is switched off).
Default value: true

name_similarity_default_pattern_enable
Description: Boolean that specifies whether to apply name similarity processing to historical patterns, meaning patterns that were created before name similarity was introduced into the Netcool Operations Insight solution. Name similarity was introduced into Netcool Operations Insight in V1.5.0, which corresponds to Netcool/Impact fix pack 14.
Possible values: true (apply name similarity processing to historical patterns), false (do not apply name similarity processing to historical patterns).
Default value: false

name_similarity_default_threshold
Description: String comparison threshold value, where 0 equates to completely dissimilar strings, and 1 equates to identical strings. The value specified in the name_similarity_default_threshold parameter is used to determine whether two strings are similar.
Note: The string similarity test is also governed by the lead and tail restriction parameters described below.
Possible values: 0 to 1 inclusive.
Default value: 0.9

name_similarity_default_lead_restriction
Description: Number of characters at the beginning of the strings being compared that must be identical.
Important: If this number of characters is not identical, then the strings automatically fail the similarity test.
Default value: 1
Note: This default setting assumes that the front end of the strings being compared is usually different.

name_similarity_default_tail_restriction
Description: Number of characters at the end of the strings being compared that must be identical.
Important: If this number of characters is not identical, then the strings automatically fail the similarity test.
Default value: 0
Note: This default setting assumes that the tail end of the strings being compared is usually the same; for example ".com".

5. Import the modified properties file into Event Analytics.
a) Ensure that you are in the directory $IMPACT_HOME/bin.
b) Run the following command to perform an import of Event Analytics system settings from a designated properties file.

nci_trigger server_name username/password NOI_DefaultValues_Configure FILENAME directory/filename

Where:
• server_name is the name of the server where Event Analytics is installed.
• username is the user name of the Event Analytics user.
• password is the password of the Event Analytics user.
• NOI_DefaultValues_Configure is a Netcool/Impact policy that performs an import of Event Analytics system settings from a designated properties file.
• directory is the directory where the properties file is stored.
• filename is the name of the properties file.
For example:

nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Configure FILENAME /tmp/properties.props

Configuring multiple resource columns
Learn how to configure multiple resource columns.

About this task
The repattern_multiresource_correlation_logic parameter is used to determine how multiple resources are checked for inclusion in a pattern. It also controls how events with similar resources are grouped together in the Event Viewer. If resource values are similar, then the corresponding events are grouped together in a single event group in the Event Viewer. Within Event Analytics, similar resources can be determined in any of the following ways:
• Exact match: this is the default setting. The resource names must match exactly.
• Regular expression: you can define a regular expression to group together resources that match the regular expression.
• Name similarity: you can configure the system to use the name similarity mechanism. This mechanism determines whether two resource names are similar by using a pattern matching algorithm that uses predefined parameters. For example, the first three characters in the resource name must be the same, or the last three characters in the resource name must be the same.
The repattern_multiresource_correlation_logic parameter is configured with the OR value by default. Use this procedure to change the setting; an example follows this task description. Only change the repattern_multiresource_correlation_logic setting if you understand the effects that this change will have on how the resulting groups are presented in the Event Viewer. When OR logic is specified, two events are correlated by resource as soon as the pattern is met for just one resource. When AND logic is specified, only "Exact match" resource matching is used and the criteria must be met for all of the resource values.
Note: If one resource field is selected per event type, the resource fields for each event type can be different. In this case, AND logic is the same as OR logic. If more than one resource field is selected, the resource fields for each event type must be the same.
Note: Suggested patterns only use one resource field. They are never generated with multiple resources.
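For example, to require that the pattern criteria be met for all resource values using exact matching, the generated properties file would contain the following entry (AND is the alternative to the default OR described above):

repattern_multiresource_correlation_logic=AND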

Procedure
1. Generate a properties file containing the latest Event Analytics system settings.
a) Navigate to the directory $IMPACT_HOME/bin.
b) Run the following command to generate a properties file containing the latest Event Analytics system settings.

nci_trigger server_name username/password NOI_DefaultValues_Export FILENAME directory/filename

Where:
• server_name is the name of the server where Event Analytics is installed.
• username is the user name of the Event Analytics user.

• password is the password of the Event Analytics user.
• NOI_DefaultValues_Export is a Netcool/Impact policy that performs an export of the current Event Analytics system settings to a designated properties file.
• directory is the directory where the properties file is stored.
• filename is the name of the properties file.
For example:

nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Export FILENAME /tmp/properties.props

2. Edit the properties file that you generated in the previous step. For example:

vi /tmp/properties.props

3. Find the section of the properties file that reads as follows. This code snippet shows the default value of the multiple resource correlation parameter.

repattern_multiresource_correlation_logic=OR

4. Update the repattern_multiresource_correlation_logic setting to either OR or AND, as described in the preceding section.
5. Import the modified properties file into Event Analytics.
a) Ensure you are in the directory $IMPACT_HOME/bin.
b) Run the following command to perform an import of Event Analytics system settings from a designated properties file.

nci_trigger server_name username/password NOI_DefaultValues_Configure FILENAME directory/filename

Where:
• server_name is the name of the server where Event Analytics is installed.
• username is the user name of the Event Analytics user.
• password is the password of the Event Analytics user.
• NOI_DefaultValues_Configure is a Netcool/Impact policy that performs an import of Event Analytics system settings from a designated properties file.
• directory is the directory where the properties file is stored.
• filename is the name of the properties file.
For example:

nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Configure FILENAME /tmp/properties.props

Configuring Event Search
Event Search applies the search and analysis capabilities of Operations Analytics - Log Analysis to events that are monitored and managed by Tivoli Netcool/OMNIbus.

About Event Search
Event Search provides an indexing capability that enables your operators to define index-based views of their event data.
Events are transferred from the ObjectServer through the Gateway for Message Bus to Operations Analytics - Log Analysis, where they are ingested into a data source and indexed for searching. After the events are indexed, you can search every occurrence of real-time and historical events. The Tivoli Netcool/OMNIbus Insight Pack is installed into Operations Analytics - Log Analysis and provides custom apps that search the events based on various criteria. The custom apps can generate dashboards that present event information to show how your monitoring environment is performing over time. Keyword searches and dynamic drilldown functions allow you to go deeper into the event data for detailed information. The apps can be run from the Operations Analytics - Log Analysis UI. Tooling can be installed into the Web GUI that launches the apps from the right-click menus of the Event Viewer and the Active Event List. An event reduction wizard is also supplied that includes information and apps that can help you analyze and reduce volumes of events and minimize the "noise" in your monitored environment.

Required products and components
You must have the following products and components installed in order to configure Event Search:
• Operations Analytics - Log Analysis V1.3.3 or V1.3.5. For the system requirements of this product, including supported operating systems, see the following topics:
– Operations Analytics - Log Analysis V1.3.5: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.5/com.ibm.scala.doc/install/ins-planning_for_installation.html
– Operations Analytics - Log Analysis V1.3.3: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.3/com.ibm.scala.doc/install/iwa_hw_sw_req_scen_c.html
Note: Operations Analytics - Log Analysis Standard Edition is included in Netcool Operations Insight. For more information about Operations Analytics - Log Analysis editions, search for "Editions" at the Operations Analytics - Log Analysis Knowledge Center, at https://www.ibm.com/support/knowledgecenter/SSPFMY .
• OMNIbusInsightPack_v1.3.1
• Gateway for Message Bus V8.0
• Netcool/OMNIbus core components V8.1.0.22 or later
• Netcool/OMNIbus Web GUI V8.1.0.22 or later
For the system requirements of the core components and Web GUI for Netcool/OMNIbus V8.1.0.22, see https://ibm.biz/BdRNaT .

Configuring integration with Operations Analytics - Log Analysis
This section describes how to configure the integration of the Netcool/OMNIbus and Operations Analytics - Log Analysis products. Events are forwarded from Netcool/OMNIbus to Operations Analytics - Log Analysis by the Gateway for Message Bus.

Before you begin
• Use a supported combination of product versions. For more information, see “Required products and components” on page 302. The best practice is to install the products in the following order:
1. Netcool/OMNIbus V8.1.0.22 and the Web GUI
2. Gateway for Message Bus. Install the gateway on the same host as the Netcool/OMNIbus product.
3. Operations Analytics - Log Analysis. See one of the following links:
– V1.3.5: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.5/com.ibm.scala.doc/install/iwa_install_ovw.html
– V1.3.3: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.3/com.ibm.scala.doc/install/iwa_install_ovw.html
4. Netcool/OMNIbus Insight Pack, see “Installing the Tivoli Netcool/OMNIbus Insight Pack” on page 60.
Tip: The best practice is to install the Web GUI and Operations Analytics - Log Analysis on separate hosts.
Restriction: Operations Analytics - Log Analysis does not support installation in Group mode of IBM Installation Manager.

• Ensure that the ObjectServer that forwards event data to Operations Analytics - Log Analysis has the NmosObjInst column in the alerts.status table. NmosObjInst is supplied by default and is required for this configuration. You can use ObjectServer SQL commands to check for the column and to add it if it is missing, as follows (see the sketch after this list):
– Use the DESCRIBE command to read the columns of the alerts.status table.
– Use the ADD COLUMN setting with the ALTER TABLE command to add NmosObjInst to the alerts.status table.
For more information about the alerts.status table, including the NmosObjInst column, see https://ibm.biz/BdXcBF . For more information about ObjectServer SQL commands, see https://ibm.biz/BdXcBX .
• Configure the Web GUI server.init file as follows:
Note: The default values do not have to be changed on Web GUI V8.1.0.22 or later.

scala.app.keyword=OMNIbus_Keyword_Search
scala.app.static.dashboard=OMNIbus_Static_Dashboard
scala.datasource=omnibus
scala.url=protocol://host:port
scala.version=1.2.0.3

Restart the server if you change any of these values. See https://ibm.biz/BdXcBc .
• Select and plan a deployment scenario. See “On-premises scenarios for Operations Management” on page 26. If your deployment uses the Gateway for Message Bus for forwarding events via the IDUC channel, you can skip step “5” on page 304. If you use the AEN client for forwarding events, complete all steps.
• Start the Operations Analytics - Log Analysis product.
• Familiarize yourself with the configuration of the Gateway for Message Bus. See https://ibm.biz/BdEQaD . Knowledge of the gateway is required for steps “1” on page 303, “5” on page 304, and “6” on page 305 of this task.
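Returning to the NmosObjInst prerequisite above, a minimal sketch using the nco_sql utility (the INTEGER data type is an assumption based on the default alerts.status schema; verify it against your ObjectServer before running):

-- Check whether the NmosObjInst column is present
describe alerts.status;
go

-- Add the column only if it is missing
alter table alerts.status add column NmosObjInst integer;
go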

Procedure
The term data source has a different meaning depending on which product you configure. In the Web GUI, a data source is always an ObjectServer. In the Operations Analytics - Log Analysis product, a data source is a source of raw data, usually log files. In the context of the event search function, the Operations Analytics - Log Analysis data source is a set of Netcool/OMNIbus events.
1. Configure the Gateway for Message Bus. At a high level, this involves the following:
• Creating a gateway server in the Netcool/OMNIbus interfaces file
• Configuring the G_SCALA.props properties file, including specifying the .map mapping file
• Configuring the endpoint in the scalaTransformers.xml file
• Configuring the SSL connection, if required
• Configuring the transport properties in the scalaTransport.properties file
For more information about configuring the gateway, see the Gateway for Message Bus documentation at https://ibm.biz/BdEQaD .
2. If you are ingesting data that is billable, and do not want data ingested into the Netcool Operations Insight data source to be included in the usage statistics, set the Netcool Operations Insight data source as non-billable. Add the path to your data source (the default is NCOMS; see the following step) to a seed file and restart Operations Analytics - Log Analysis as described in one of the following topics:
• V1.3.5: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.5/com.ibm.scala.doc/admin/iwa_nonbill_ds_t.html

• V1.3.3: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.3/com.ibm.scala.doc/admin/iwa_nonbill_ds_t.html
Note: Ensure that you complete this step before you configure an "omnibus" data source for Netcool/OMNIbus events.
3. In Operations Analytics - Log Analysis, start the Add Data Source wizard and configure an "omnibus" data source for Netcool/OMNIbus events. Only a single data source is required. The event management tools in the Web GUI support a single data source only.
a) In the Select Location panel, select Custom and type the Netcool/OMNIbus server host name. Enter the same host name that was used for the JsonMsgHostname transport property of the Gateway for Message Bus.
b) In the Select Data panel, enter the following field values:

Field: File path
Value: NCOMS. This is the default value of the jsonMsgPath transport property of the Gateway for Message Bus. If you changed this value from the default, change the value of the File path field accordingly.

Field: Type
Value: The name of the data source type on which this data source is based.
• To use the default data source type, specify OMNIbus1100.
• To use a customized data source type, specify the name of the customized data source type; for example: customOMNIbus

Field: Collection
Value: OMNIbus1100-Collection

c) In the Set Attributes panel, enter the following field values:

Field: Name
Value: omnibus. Ensure that the value that you type is the same as the value of the scala.datasource property in the Web GUI server.init file. If the Name field has a value other than omnibus, use the same value for the scala.datasource property.

Field: Group
Value: Leave this field blank.

Field: Description
Value: Type a description of your choice.

4. Configure access to the data source that you set up in the previous step. This involves the following steps in the administrative settings for Operations Analytics - Log Analysis:
a) Create a role by using the Roles tab, for example, noirole, and ensure that you assign the role permission to access the data source.
b) Add a user, for example, noiuser, and assign the role that you created that has permissions to access the data source (in this example, noirole).
For information about creating and modifying users and roles in Operations Analytics - Log Analysis, see one of the following links:
• V1.3.5: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.5/com.ibm.scala.doc/config/iwa_config_pinstall_userrole_ovw_c.html
• V1.3.3: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.3/com.ibm.scala.doc/config/iwa_config_pinstall_userrole_ovw_c.html
Note: The contents of the Netcool/OMNIbus Insight Pack dashboards are empty unless you log in with a user that has a role assigned with permissions to access the data source.
5. Configure the Accelerated Event Notification (AEN) client:

a) Configure AEN event forwarding in the Gateway for Message Bus.
b) Configure the AEN channel and triggers in each ObjectServer by enabling the following post-insert triggers and trigger group:
• scala_triggers
• scala_insert
• scala_reinsert
These items are included in the default configuration of the ObjectServer, as are the SQL commands to configure the AEN channel, but they are disabled by default. For more information about configuring the AEN client in an integration with the Operations Analytics - Log Analysis product, search for Configuring event forwarding using AEN in the Gateway for Message Bus documentation.
6. Start the Gateway for Message Bus in SCALA mode. The gateway begins sending Netcool/OMNIbus events to Operations Analytics - Log Analysis.
7. Install the Web GUI with the event search feature.

Results
After the configuration is complete, you can search for Netcool/OMNIbus events in Operations Analytics - Log Analysis. You can also use the Web GUI event management tools to launch into Operations Analytics - Log Analysis to display event data.

What to do next
• Install any available interim fixes and fix packs for the Operations Analytics - Log Analysis product, which are available from IBM Fix Central at http://www.ibm.com/support/fixcentral/ .
• You can customize event search in the following ways:
– Change the Operations Analytics - Log Analysis index configuration. For more information, see “Customizing events used in Event Search using the DSV toolkit” on page 327. If you change the index configuration, also change the map file of the Gateway for Message Bus. After the map file is changed, restart the gateway. For more information, search for Map definition file at https://ibm.biz/BdEQaD .
– Customize the Operations Analytics - Log Analysis custom apps that are in the Insight Pack, or create new apps. For more information, see “Customizing the Apps” on page 329.
– Customize the Web GUI event list tools. For more information, see “Customizing event management tools” on page 307.
• If the Web GUI and Operations Analytics - Log Analysis are on the same host, configure single sign-on to prevent browser sessions from expiring. See “Configuring single sign-on for the event search capability” on page 305.

Configuring single sign-on for the event search capability
Configure single sign-on (SSO) between the Web GUI and Operations Analytics - Log Analysis so that users can switch between the two products without having to log in each time.

Before you begin
Before performing this task, ensure that the following requirements are met:
• All server instances are in the same domain; for example, domain_name.uk.ibm.com.
• LTPA keys are the same across all server instances.
• The LTPA cookie name that is used in Operations Analytics - Log Analysis must contain the string ltpatoken.

About this task
First create dedicated users in your LDAP directory, which must be used by both the Web GUI and Operations Analytics - Log Analysis for user authentication, and then configure the SSO connection.

Table 48. Quick reference for configuring single sign-on

Step 1. Create the dedicated users and groups in your LDAP directory. For example:
1. Create a new Organization Unit (OU) named NetworkManagement.
2. Under the NetworkManagement OU, create a new group named webguildap.
3. Under the NetworkManagement OU, create the following new users: webgui1, webgui2, webgui3, and webgui4.
4. Add the new users to the webguildap group.
More information: The LDAP groups that you want to use in the Web GUI must have roles that the Web GUI recognizes. For more information, see the following topic: Configuring user authentication for Web GUI against an LDAP directory.

Step 2. In the Web GUI, assign the ncw_admin and ncw_user roles to the webguildap group that you created in step 1.
More information: See the following topics:
• Assigning roles to Web GUI users and groups
• Web GUI roles

Step 3. Configure Dashboard Application Services Hub and Operations Analytics - Log Analysis to use the same LDAP directory for authentication.
More information: For more information on configuring these products to use LDAP, see the following topics:
• Configuring Dashboard Application Services Hub to use LDAP
• Configuring Operations Analytics - Log Analysis to use LDAP

Step 4. Configure Dashboard Application Services Hub for single sign-on. This enables users to access all of the applications running in Dashboard Application Services Hub by logging in only once.
More information: See the following topic: Configuring Dashboard Application Services Hub for single sign-on.

Step 5. Configure the SSO connection from the Operations Analytics - Log Analysis product to the Dashboard Application Services Hub instance in which the Web GUI is hosted. The following steps of the Operations Analytics - Log Analysis SSO configuration are important:
• Export LTPA keys from the Jazz for Service Management server.
• Update the LA ldapRegistryHelper.properties file.
• Run the LA ldapRegistryHelper.sh script.
• Configure LTPA on the Liberty Profile for WAS (copy the LTPA keys from Jazz).
More information: See the following topic: Configuring SSO for Operations Analytics - Log Analysis with Jazz for Service Management.


Step 6. Assign Operations Analytics - Log Analysis roles to the users and groups that you created in step 1.

Step 7. In the $SCALA_HOME/wlp/usr/servers/Unity/server.xml file, ensure that the webAppSecurity element has an httpOnlyCookies="false" attribute. Add this line before the closing server element. For example:
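A minimal sketch of the line to add, assuming the standard WebSphere Liberty webAppSecurity configuration element; verify the element against your server.xml:

<!-- Disable the httponly flag so that the Web GUI can read the SSO cookie -->
<webAppSecurity httpOnlyCookies="false"/>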

The httpOnlyCookies="false" attribute disables the httponly flag on the cookie that is generated by Operations Analytics - Log Analysis and is required to enable SSO with the Web GUI.

Customizing event management tools
The tools in the Event Viewer and AEL search for event data based on fields from the Netcool/OMNIbus ObjectServer. The fields are specified by URLs that are called when the Operations Analytics - Log Analysis product is started from the tools. You can change the URLs in the tools from the default event fields to search on fields of your choice.
For example, the Search for similar events > 15 minutes before event tool filters events on the AlertGroup, Type, and Severity fields. The default URL is:

$(SERVER)/integrations/scala/Search?queryFields=AlertGroup,Type,Severity
&queryValuesAlertGroup={$selected_rows.AlertGroup}
&queryValuesType={CONVERSION($selected_rows.Type)}
&queryValuesSeverity={CONVERSION($selected_rows.Severity)}
&firstOccurrences={$selected_rows.FirstOccurrence}
&timePeriod=15
&timePeriodUnits=minutes

About this task
You can change the URLs in the following ways:
• Change the scalaIntegration.xml configuration file and apply the changes with the Web GUI runwaapi command that is included in the Web GUI Administration API (WAAPI) client.
• Change the tool configuration in the Web GUI Administration console page.

Procedure
As an example, the following steps show how to use each method to change the URLs in the Search for similar events > 15 minutes before event tool to search on the AlertKey and Location event fields.
• To change the URLs in the scalaIntegration.xml configuration file:
a) In WEBGUI_HOME/extensions/LogAnalytics/scalaIntegration.xml, or the equivalent XML file if you use a different file, locate the element that defines the scalaSearchByEvent15Minutes tool and contains the default URL shown above.

b) Change the URL in this element so that it searches on the AlertKey and Location fields.
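The modified URL, identical to the URL used in the Administration page method later in this topic, is:

$(SERVER)/integrations/scala/Search?queryFields=AlertKey,Location
&queryValuesAlertKey={$selected_rows.AlertKey}
&queryValuesLocation={$selected_rows.Location}
&firstOccurrences={$selected_rows.FirstOccurrence}
&timePeriod=15
&timePeriodUnits=minutes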

c) Use the runwaapi command to reinstall the tools:

runwaapi -file scalaIntegration.xml

d) Reinstall the following tool menus to the Event Viewer or AEL alerts menu item:
– scalaStaticDashboard
– scalaSimilarEvents
– scalaEventByNode
– scalaKeywordSearch
• To change the URLs on the Administration page:
a) In the Web GUI, click Administration > Event Management Tools > Tool Creation. Then, on the Tool Creation page, locate the scalaSearchByEvent15Minutes tool.
b) Change the URL as follows:

$(SERVER)/integrations/scala/Search?queryFields=AlertKey,Location
&queryValuesAlertKey={$selected_rows.AlertKey}
&queryValuesLocation={$selected_rows.Location}
&firstOccurrences={$selected_rows.FirstOccurrence}
&timePeriod=15
&timePeriodUnits=minutes

c) Refresh the Event Viewer or AEL so that the changes to the tool URL are loaded.

What to do next
• The Gateway for Message Bus uses a lookup table to convert the Severity, Type, and Class event field integer values to strings. After a tool is changed or created, use the CONVERSION function to change these field values to the strings that are required by Operations Analytics - Log Analysis.
• Change the other tools in the menu so that they search on the same field. It is more efficient to change the configuration file and then use the runwaapi command than to change each tool in the UI. The following table lists the names of the event management menu items and tools that are displayed in the Tool Creation and Menu Configuration pages.

Table 49. Web GUI menu and tool names

Menu item: Search for events by node (menu item name: scalaEventByNode)
• 15 minutes before event: scalaSearchByNode15Minutes
• 1 hour before event: scalaSearchByNode1Hour
• 1 day before event: scalaSearchByNode1Day
• 1 week before event: scalaSearchByNode1Week
• 1 month before event: scalaSearchByNode1Month
• 1 year before event: scalaSearchByNode1Year
• Custom ...: scalaSearchByNodeCustom

Menu item: Search for similar events (menu item name: scalaSimilarEvents)
• 15 minutes before event: scalaSearchByEvent15Minutes
• 1 hour before event: scalaSearchByEvent1Hour
• 1 day before event: scalaSearchByEvent1Day
• 1 week before event: scalaSearchByEvent1Week
• 1 month before event: scalaSearchByEvent1Month
• 1 year before event: scalaSearchByEvent1Year
• Custom ...: scalaSearchByEventCustom

Menu item: Show event dashboard by node (menu item name: scalaStaticDashboard)
• 15 minutes before event: scalaEventDistributionByNode15Minutes
• 1 hour before event: scalaEventDistributionByNode1Hour
• 1 day before event: scalaEventDistributionByNode1Day
• 1 week before event: scalaEventDistributionByNode1Week
• 1 month before event: scalaEventDistributionByNode1Month
• 1 year before event: scalaEventDistributionByNode1Year
• Custom ...: scalaEventDistributionByNodeCustom

Menu item: Show keywords and event count (menu item name: scalaKeywordSearch)
• 15 minutes before event: scalaSetSearchFilter15Minutes
• 1 hour before event: scalaSetSearchFilter1Hour
• 1 day before event: scalaSetSearchFilter1Day
• 1 week before event: scalaSetSearchFilter1Week
• 1 month before event: scalaSetSearchFilter1Month
• 1 year before event: scalaSetSearchFilter1Year
• Custom ...: scalaSetSearchFilterCustom

• The Show event dashboard by node and Show keywords and event count tools start the OMNIbus Static Dashboard and OMNIbus Keyword Search custom apps in Operations Analytics - Log Analysis. For more information about customizing the apps, see “Customizing the Apps” on page 329.

Adding custom apps to the Table View toolbar
To quickly launch custom apps, add them to the Table View toolbar of the Operations Analytics - Log Analysis UI. It is good practice to add the OMNIbus_Keyword_Search.app and OMNIbus_Static_Dashboard.app apps to the toolbar.

Procedure
• To add the OMNIbus_Keyword_Search.app app, use a configuration that is similar to the following example:

{ "url": "https://hostname:9987/Unity/CustomAppsUI? name=OMNIbus_Keyword_Search&appParameters=[]", "icon": "https://hostname:9987/Unity/images/keyword-search.png", "tooltip": "OMNIbus Keyword Search" }

Where hostname is the fully qualified domain name of the Operations Analytics - Log Analysis host and keyword-search is the file name for a .png file that represents the app on the toolbar. Create your own .png file.
• To add the OMNIbus_Static_Dashboard.app app, use a configuration that is similar to the following example:

{ "url": "https://hostname:9987/Unity/CustomAppsUI? name=OMNIbus_Static_Dashboard&appParameters=[]", "icon": "https://hostname:9987/Unity/images/dashboard.png", "tooltip": "OMNIbus Static Dashboard" }

Where hostname is the fully qualified domain name of the Operations Analytics - Log Analysis host and dashboard is the file name for a .png file that represents the app on the toolbar. Create your own .png file.

Customizing the Netcool/OMNIbus Insight Pack
You can customize Event Search to your specific needs by customizing the Netcool/OMNIbus Insight Pack. For example, you might want to send an extended set of Netcool/OMNIbus event fields to Operations Analytics - Log Analysis and chart results based on those fields.
Note: Do not directly modify the Netcool/OMNIbus Insight Pack. Instead, use it as a base for creating customized Insight Packs and custom apps.
Related concepts
Operations Management tasks

Use this information to understand the tasks that users can perform using Operations Management.

Netcool/OMNIbus Insight Pack
The Netcool/OMNIbus Insight Pack enables you to view and search both historical and real-time event data from Netcool/OMNIbus in the IBM Operations Analytics - Log Analysis product. This documentation is for Tivoli Netcool/OMNIbus Insight Pack V1.3.0.2.
The Insight Pack parses Netcool/OMNIbus event data into a format suitable for use by Operations Analytics - Log Analysis. The event data is transferred from Netcool/OMNIbus to Operations Analytics - Log Analysis by the IBM Tivoli Netcool/OMNIbus Gateway for Message Bus (nco_g_xml). For more information about the Gateway for Message Bus, see http://www-01.ibm.com/support/knowledgecenter/SSSHTQ/omnibus/gateways/xmlintegration/wip/concept/xmlgw_intro.html .

Content of the Insight Pack
The Insight Pack provides the following data ingestion artifacts:
• A Rule Set (with annotator and splitter) that parses Netcool/OMNIbus event data into Delimiter Separated Value (DSV) format.
• A Source Type that matches the event fields in the Gateway for Message Bus map file.
• A Collection that contains the provided Source Type.
• Custom apps, which are described in Table 50 on page 312.
• A wizard to help you analyze and reduce event volumes, which is described in “Event reduction wizard” on page 314. The wizard also contains custom apps, which are described in Table 51 on page 315.
Tip: The data that is shown by the custom apps originates in the alerts.status table of the Netcool/OMNIbus ObjectServer. For example, the Node column identifies the entities from which events originate, such as hosts or device names. For more information about the columns of the alerts.status table, see IBM Knowledge Center at http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/common/reference/omn_ref_tab_alertsstatus.html .

Custom apps
The following table describes the custom apps. The apps are all launched from the Operations Analytics - Log Analysis UI. Some apps can also be launched from event lists in the Netcool/OMNIbus Web GUI, that is, the Event Viewer or Active Event List (AEL). The configuration for launching the tools from the Web GUI is not included in this Insight Pack. To obtain this configuration, install the latest fix pack of the Web GUI V8.1.

Table 50. Custom apps in the Netcool/OMNIbus Insight Pack

OMNIbus Static Dashboard (file name: OMNIbus_Static_Dashboard.app)
Can also be launched from Web GUI event list: Yes
Opens a dashboard with charts that show the following event distribution information:
• Event Trend by Severity
• Event Storm by AlertGroup
• Event Storm by Node
• Hotspot by Node and AlertGroup
• Severity Distribution
• Top 5 AlertGroups Distribution
• Top 5 Nodes Distribution
• Hotspot by AlertGroup and Severity
The app searches against the specified data source, a time filter specified by the operator when they launch the tool, and the Node of the selected events. The app then generates charts based on the events returned by the search.
Charts supplied by the Tivoli Netcool/OMNIbus Insight Pack changed in V1.3.0.2. The charts now specify a filter of NOT PubType:U, which ensures that each event is counted once only, even if deduplications occur. The exception is the keyword search custom app, which searches all events, including modified ones.
In the Operations Analytics - Log Analysis UI, the app requires data from a search result before it can run. If you do not run a search before you run the app, an error is displayed.
1. To run a new search, click Add search and specify the string that you want to search for.
2. A list of corresponding events is displayed in the search results.
3. In the left panel, click Search Dashboards > OMNIbusInsightPack and double-click Static Event Dashboard.

OMNIbus Keyword Search (file name: OMNIbus_Keyword_Search.app)
Can also be launched from Web GUI event list: Yes
Uses information from the selected events to generate a keyword list with count, data source filter, and time filter in Operations Analytics - Log Analysis. The app generates the keyword list from the specified columns of the selected events. The default columns are Summary, Node, and AlertGroup. The app then creates the data source filter with the value specified by the event list tool and creates the time filter with the value that was selected when the tool was launched.
In the Operations Analytics - Log Analysis UI, the app requires data from a search result before it can run. If you do not run a search before you run the app, an error is displayed.
1. To run a new search, click Add search and specify the string that you want to search for.
2. A list of corresponding events is displayed in the search results. Switch to the grid view and select the required entries. Click a column header to select the entire column.
3. In the left panel, click Search Dashboards > OMNIbusInsightPack and double-click Keyword Search. In the Search Patterns section, a list of keywords from the selected data is displayed. The event count associated with those keywords is in parentheses ().

OMNIbus Dynamic Dashboard (file name: OMNIbus_Dynamic_Dashboard.app)
Can also be launched from Web GUI event list: No
Searches the events in the "omnibus" data source over the last day and generates a dashboard with eight charts. The charts are similar to the charts generated by the OMNIbus Static Dashboard app but they also support drill down. You can double-click any data point in the chart to open a search workspace that is scoped to the event records that make up that data point.
To open the dashboard in the Operations Analytics - Log Analysis user interface, click Search Dashboards > OMNIbusInsightPack > Last_Day > Dynamic Event Dashboard. This dashboard is not integrated with the event lists in the Web GUI.

OMNIbus Operational Efficiency (file name: OMNIbus_Operational_Efficiency.app)
Can also be launched from Web GUI event list: No
Searches the events from the "omnibus" data source over the last month and generates a dashboard with the following charts:
• Last Month - Top 10 AlertKeys: Shows the AlertKeys that generated the most events, distributed by severity.
• Last Month - Top 10 AlertGroups: Shows the AlertGroups that generated the most events, distributed by severity.
• Last Month - Top 10 Node: Shows the Nodes that generated the most events, distributed by severity.
• Last Month - Hotspot by Node, Group, AlertKey: Combines the three other charts to show the Nodes, AlertGroups, and AlertKeys that generated the most events in a tree map.
To open the dashboard in the Operations Analytics - Log Analysis user interface, click Search Dashboards > OMNIbusInsightPack > Last_Month > Operational Efficiency. This dashboard is not integrated with the event lists in the Web GUI.

Event reduction wizard
The Event_Analysis_And_Reduction app is a guide to analyzing events in your environment and reducing event volumes. It consists of three sets of information and seven custom apps.
The information is designed to help you understand the origin of high event volumes in your environment and create an action plan to reduce volumes. The information is in the first three nodes of the Event_Analysis_And_Reduction node on the UI: OMNIbus_Analyze_and_reduce_event_volumes, OMNIbus_Helpful_links, and OMNIbus_Introduction_to_the_Apps.
The seven custom apps analyze the origins of the high event volumes in your environment. They are described in the following table. For the best results, run the apps in the order that is given here. The wizard and the apps that it contains can be run only from the Operations Analytics - Log Analysis UI.

Table 51. Custom apps in the Event_Analysis_And_Reduction wizard

OMNIbus_Show_Event_1_Trend_Severity (file name: OMNIbus_Show_Event_1_Trend_Severity.app)
Shows charts with five common dimensions for analyzing trends in event volumes over time:
• Event trends by severity for the past hour, aggregated by minute.
• Event trends by severity for the past day, aggregated by hour.
• Event trends by severity for the past week, aggregated by day.
• Event trends by severity for the past month, aggregated by week.
• Event trends by severity for the past year, aggregated by month.

OMNIbus_Show_Event_2_HotSpots_Node (file name: OMNIbus_Show_Events_2_HotSpots_Node.app)
Analyzes events by node, that is, the entities from which events originate. Examples include the source end point system, EMS or NMS, probe or gateway, and so on. You can modify this app to analyze the manager field, so that it shows the top event volumes by source system or integration. The app has the following charts:
• The 20 nodes with the highest event counts over the past hour.
• The 20 nodes with the highest event counts over the past day.
• The 20 nodes with the highest event counts over the past week.
• The 20 nodes with the highest event counts over the past month.
• The 20 nodes with the highest event counts over the past year.

OMNIbus_Show_Event_3_HotSpots_AlertGroup (file name: OMNIbus_Show_Events_3_HotSpots_AlertGroup.app)
Analyzes the origin of events by the classification that is captured in the AlertGroup field, for example, the type of monitoring agent, or situation. The app has the following charts:
• The 20 AlertGroups with the highest event counts over the past hour.
• The 20 AlertGroups with the highest event counts over the past day.
• The 20 AlertGroups with the highest event counts over the past week.
• The 20 AlertGroups with the highest event counts over the past month.
• The 20 AlertGroups with the highest event counts over the past year.

OMNIbus_Show_Event_4_HotSpots_AlertKey (file name: OMNIbus_Show_Event_4_HotSpots_AlertKey.app)
Analyzes the origin of events by the classification that is captured in the AlertKey field, for example, the type of monitoring agent or situation. The app has the following charts:
• The 20 AlertKeys with the highest event counts over the past hour.
• The 20 AlertKeys with the highest event counts over the past week.
• The 20 AlertKeys with the highest event counts over the past month.
• The 20 AlertKeys with the highest event counts over the past year.

OMNIbus_Show_Event_5_HotSpots_NodeSeverity (file name: OMNIbus_Show_Event_5_HotSpots_NodeSeverity.app)
Shows the nodes with the highest event counts by event severity. The app has the following charts:
• The 10 nodes with the highest event counts by event severity over the past hour.
• The 10 nodes with the highest event counts by event severity over the past day.
• The 10 nodes with the highest event counts by event severity over the past week.
• The 10 nodes with the highest event counts by event severity over the past month.
• The 10 nodes with the highest event counts by event severity over the past year.

OMNIbus_Show_Event_6_HotSpots_NodeAlertGroup (file name: OMNIbus_Show_Event_6_HotSpots_NodeAlertGroup.app)
Shows the nodes with the highest event counts by the classification in the AlertGroup field, for example, the type of monitoring agent or situation. The app has the following charts:
• The 10 nodes with the highest event counts from the top 5 AlertGroups over the past hour.
• The 10 nodes with the highest event counts from the top 5 AlertGroups over the past day.
• The 10 nodes with the highest event counts from the top 5 AlertGroups over the past week.
• The 10 nodes with the highest event counts from the top 5 AlertGroups over the past month.
• The 10 nodes with the highest event counts from the top 5 AlertGroups over the past year.

OMNIbus_Show_Event_7_HotSpots_NodeAlertKey (file name: OMNIbus_Show_Event_7_HotSpots_NodeAlertKey.app)
Shows the nodes with the highest event counts by the classification in the AlertKey field, for example, the monitoring agent or situation. The app has the following charts:
• The 10 nodes with the highest event counts from the top 5 AlertKeys over the past hour.
• The 10 nodes with the highest event counts from the top 5 AlertKeys over the past day.
• The 10 nodes with the highest event counts from the top 5 AlertKeys over the past week.
• The 10 nodes with the highest event counts from the top 5 AlertKeys over the past month.
• The 10 nodes with the highest event counts from the top 5 AlertKeys over the past year.

By default, the custom apps include all events. To exclude certain events, for example, events that occur during maintenance windows, customize the search query used in the custom apps. For more information, see “Customizing the Apps” on page 329.

Checking the version of the Insight Pack
To ensure compatibility between the versions of the Tivoli Netcool/OMNIbus Insight Pack, the Web GUI, and the Operations Analytics - Log Analysis product, run the pkg_mgmt command to check which version of the Insight Pack is installed.

Procedure
To check which version of the Insight Pack is installed, run the pkg_mgmt command as follows:

$SCALA_HOME/utilities/pkg_mgmt.sh -list

Search the results for a line similar to the following example. In this example, V1.3.0.2 is installed.

[packagemanager] OMNIbusInsightPack_v1.3.0.2 /home/myhome/IBM/LogAnalysis/unity_content/OMNIbus

Event annotations
The event annotations defined by the Insight Pack index configuration are described here.
The following table lists the Netcool/OMNIbus event fields that are defined in the index configuration file. It also lists the index configuration attributes assigned to each field. These annotations are displayed in the Operations Analytics - Log Analysis Search workspace, and can be used to filter or search the events.
Tip: The fields are not necessarily listed in Table 52 on page 317 in the same order as the data source properties file. If you need to know the order in which the fields are given, see the omnibus1100.properties file, which is in the docs directory of the Tivoli Netcool/OMNIbus Insight Pack.
Best practice for filtering on annotations is as follows:
• To avoid a negative impact on performance, set filterable attributes to true only on required fields.
• Do not set the filterable attribute to true for fields with potentially long strings, for example, the Summary field.

Table 52. Event annotations

LastOccurrence
dataType: DATE; retrievable: true; retrieveByDefault: true; sortable: true; filterable: true; searchable: true

logRecord (a default field required by Operations Analytics - Log Analysis)
dataType: TEXT; retrievable: true; retrieveByDefault: true; sortable: false; filterable: false; searchable: true

Class
dataType: TEXT; retrievable: true; retrieveByDefault: true; sortable: false; filterable: false; searchable: true

AlertGroup
dataType: TEXT; retrievable: true; retrieveByDefault: true; sortable: true; filterable: true; searchable: true

Severity
dataType: TEXT; retrievable: true; retrieveByDefault: true; sortable: true; filterable: true; searchable: true

AlertKey
dataType: TEXT; retrievable: true; retrieveByDefault: true; sortable: true; filterable: true; searchable: true

Tally
dataType: TEXT; retrievable: true; retrieveByDefault: true; sortable: false; filterable: false; searchable: true

NmosObjInst
dataType: TEXT; retrievable: true; retrieveByDefault: true; sortable: true; filterable: true; searchable: true

NodeAlias
dataType: TEXT; retrievable: true; retrieveByDefault: true; sortable: true; filterable: true; searchable: true

timestamp
dataType: DATE; retrievable: true; retrieveByDefault: true; sortable: true; filterable: true; searchable: true

Type
dataType: TEXT; retrievable: true; retrieveByDefault: true; sortable: true; filterable: false; searchable: true

Location
dataType: TEXT; retrievable: true; retrieveByDefault: true; sortable: true; filterable: true; searchable: true

Identifier
dataType: TEXT; retrievable: true; retrieveByDefault: true; sortable: true; filterable: false; searchable: true

Node
dataType: TEXT; retrievable: true; retrieveByDefault: true; sortable: true; filterable: true; searchable: true

Summary
dataType: TEXT; retrievable: true; retrieveByDefault: true; sortable: true; filterable: false; searchable: true

OmniText
dataType: TEXT; retrievable: true; retrieveByDefault: true; sortable: false; filterable: false; searchable: true
This field contains a concatenated string of the event fields. By default, its value is: '@Manager' + ' ' + '@Agent' + ' ' + TO_STRING('@Grade'). In the default configuration, the names of the individual event fields contained in the OmniText string are not visible in search results.

PubType
dataType: TEXT; retrievable: true; retrieveByDefault: true; sortable: false; filterable: true; searchable: true
The value is I when an event is inserted, and U when an existing entry is re-inserted or updated.

ServerName
dataType: TEXT; retrievable: true; retrieveByDefault: true; sortable: false; filterable: true; searchable: true
Corresponds to ServerName in alerts.status.

ServerSerial
dataType: LONG; retrievable: true; retrieveByDefault: true; sortable: false; filterable: false; searchable: true
Corresponds to ServerSerial in alerts.status.

Example queries
The following examples show you how to issue search queries on events:
• “Track changes to a specific event” on page 320
• “Search on one instance of an event only” on page 320

Track changes to a specific event
Issue the following search to track changes to a specific event:

ServerSerial:NNNN AND ServerName:NCOMS

Where NNNN is the serial number and NCOMS is the name of the server.

Search on one instance of an event only
Issue the following search to search on one instance of an event only:

NOT PubType:U
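The two example restrictions can also be combined. A sketch of a query that tracks only the original insert of a specific event (NNNN and NCOMS are placeholders, as in the first example):

ServerSerial:NNNN AND ServerName:NCOMS AND NOT PubType:U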

Customizing the events used in Event Search
You can send an extended set of Netcool/OMNIbus event fields to Operations Analytics - Log Analysis and chart results based on those fields.

Customizing events used in Event Search using the addIndex.sh script
Use the addIndex.sh script if you are running Netcool Operations Insight V1.4.1.2 or later.

Setting up and activating an initial set of custom events
You must first create a data source type containing the extended set of Netcool/OMNIbus event fields to send to Operations Analytics - Log Analysis, and then update the Netcool/OMNIbus data source already defined in Operations Analytics - Log Analysis to use this type. You then modify and restart the Gateway for Message Bus; Event Search then charts results based on the custom fields that you specified.

Before you begin
Ensure that the following prerequisites are in place before performing this task:
• Operations Analytics - Log Analysis V1.3.3 or V1.3.5 is installed.
• On the Operations Analytics - Log Analysis server, the $UNITY_HOME environment variable is set to the directory where Operations Analytics - Log Analysis is installed.
• The DSV toolkit is installed in the $UNITY_HOME/unity_content/DSVToolkit_v1.1.0.4 directory and you have write access to the directory.
• A version of Python that is compatible with the Operations Analytics - Log Analysis release is installed and is on the system path.
• Tivoli Netcool/OMNIbus Insight Pack V1.3.1 is installed on the Operations Analytics - Log Analysis server.

About this task
You have added a new custom field to the ObjectServer alerts.status table and you want to update the data source to send this field to Operations Analytics - Log Analysis along with the other fields, so that Event Search can present charts and dashboards using this new custom field. For the purposes of this task, we will assume that the new custom field stores a trouble ticket number, and is called TicketNumber.

Creating a custom data source type
Create a custom data source type to include the new custom field in addition to the existing default fields.

About this task
In Operations Analytics - Log Analysis, a data source is an entity that enables Operations Analytics - Log Analysis to ingest data from a specific source. In order for Operations Analytics - Log Analysis to ingest data from Netcool/OMNIbus, a data source is required. A data source type is a template for a data source, and lists the event fields to send to Operations Analytics - Log Analysis, together with relevant control parameters for each field. You can have multiple data source types, each set up with a different set of event fields; the advantage of this is that you can easily change the events in the data source by using a predefined data source type.

The default data source type is called OMNIbus1100; this data source type contains the default set of event fields that are sent to Operations Analytics - Log Analysis.

Procedure
1. Log in to the Operations Analytics - Log Analysis server and open a terminal there.
2. Unzip the contents of Tivoli Netcool/OMNIbus Insight Pack V1.3.1 to a local directory. This procedure assumes that the archive has been unzipped to the following location:

/home/user/OMNIbusInsightPack_v1.3.1

3. Go to the docs sub-directory within the location to which you unzipped the file.

cd /home/user/OMNIbusInsightPack_v1.3.1/docs

4. Edit the omnibus1100_template.properties file using a text editor of your choice, for example, vi:

vi omnibus1100_template.properties

The omnibus1100_template.properties file contains index definitions corresponding to one or more fields to be sent using the data source. For the Netcool/OMNIbus data source all of the event fields must be indexed, so the omnibus1100_template.properties file contains an index entry for each event field to be sent to Operations Analytics - Log Analysis. 5. Add the new custom field to the end of the omnibus1100_template.properties file. The following code snippet shows the beginning of the file and the end of the file.

# Properties file controlling data source specification
# Add new fields at the end of the file
# 'moduleName' specifies the name of the data source.
# Update the version number if you have created a data source with the same name previously and want
# to upgrade it to add an additional field.

# ------
# Insert new fields after this point.
# Number each field sequentially, starting with 'field19'.
# See the IBM Smart Cloud Log Analytics documentation for the DSV Toolkit for an explanation
# of the field values.
# ------
#[field19_indexConfig]
#name:
#dataType: TEXT
#retrievable: true
#retrieveByDefault: true
#sortable: false
#filterable: true

The end of the file includes a commented-out section that you can uncomment to add the new field. In that section, replace the name: attribute with the name of the field that you are adding. Here is what the end of the file looks like when an index has been added for the new custom field TicketNumber:

# ------
# Insert new fields after this point.
# Number each field sequentially, starting with 'field19'.
# See the IBM Smart Cloud Log Analytics documentation for the DSV Toolkit for an explanation
# of the field values.
# ------
[field19_indexConfig]
name: Ticket_Number
dataType: TEXT
retrievable: true
retrieveByDefault: true
sortable: false
filterable: true

Note: The order of indexes is important; it must match the order of values specified in the Gateway for Message Bus mapping file. This mapping file will be modified later in the procedure. For more information on the other attributes of the index, see the following Operations Analytics - Log Analysis topics:
• 1.3.5: Editing an index configuration
• 1.3.5: Example properties file with edited index configuration fields
6. Change the name of the new custom data source type that you are about to create. Within the [DSV] section of the omnibus1100_template.properties file, find the attribute specification moduleName and change the value specified there. By default, moduleName is set to CloneOMNIbus. You can change this to a more meaningful name; for example, customOMNIbus.
7. Save the omnibus1100_template.properties file and exit the file editor.
8. From within the /home/user/OMNIbusInsightPack_v1.3.1/docs directory, run the addIndex.sh script to create the new data source type.

addIndex.sh -i

9. Check that the data source type was created and installed onto the Operations Analytics - Log Analysis server by running the following command:

$UNITY_HOME/utilities/pkg_mgmt.sh -list

Where $UNITY_HOME is the Operations Analytics - Log Analysis home directory; for example, /home/scala/IBM/LogAnalysis/.

Results
The following two artifacts are also created. Store them in a safe place and make a note of the directory where you stored them, as you might need them later:
Insight pack image archive
By default, this archive is called CloneOMNIbusInsightPack_v1.3.1.0.zip. If you followed the suggested example in this procedure, then this archive is called customOMNIbusInsightPack_v1.3.1.0.zip. This archive contains the new custom data source type; you need a copy of this image if you ever want to delete it from the system in the future. The archive is located in the following directory:

/home/user/OMNIbusInsightPack_v1.3.1/dist

Template properties file
This is the omnibus1100_template.properties file that you edited during this procedure. Keep a copy of this file in case you want to modify the data source type settings at a later time.

Creating a custom data source
Create a new data source based on the custom data source type that you created earlier.

Before you begin
The Web GUI right-click tools and the Event Search dashboards and charts are coded to use a data source named omnibus. The prerequisite for this task varies depending on whether you have ever ingested data into the existing omnibus data source:
• If you have already ingested data into the existing omnibus data source, then you must delete the data. For information on how to do this, see the Operations Analytics - Log Analysis 1.3.5 documentation: Deleting data .
• If you have not yet ingested data into the existing omnibus data source, then simply delete this data source.

Procedure
1. In Operations Analytics - Log Analysis, start the Add Data Source wizard and configure an "omnibus" data source for Netcool/OMNIbus events. Only a single data source is required. The event management tools in the Web GUI support a single data source only.
a) In the Select Location panel, select Custom and type the Netcool/OMNIbus server host name. Enter the same host name that was used for the JsonMsgHostname transport property of the Gateway for Message Bus.
b) In the Select Data panel, enter the following field values:

Field: File path
Value: NCOMS. This is the default value of the jsonMsgPath transport property of the Gateway for Message Bus. If you changed this value from the default, change the value of the File path field accordingly.

Field: Type
Value: The name of the data source type on which this data source is based.
• To use the default data source type, specify OMNIbus1100.
• To use a customized data source type, specify the name of the customized data source type; for example: customOMNIbus

Field: Collection
Value: OMNIbus1100-Collection

c) In the Set Attributes panel, enter the following field values:

Field: Name
Value: omnibus. Ensure that the value that you type is the same as the value of the scala.datasource property in the Web GUI server.init file. If the Name field has a value other than omnibus, use the same value for the scala.datasource property.

Field: Group
Value: Leave this field blank.

Field: Description
Value: Type a description of your choice.

2. Configure access to the data source that you set up in the previous step. This involves the following steps in the administrative settings for Operations Analytics - Log Analysis:
a) Create a role by using the Roles tab, for example, noirole, and ensure that you assign the role permission to access the data source.
b) Add a user, for example, noiuser, and assign the role that you created that has permissions to access the data source (in this example, noirole).
For information about creating and modifying users and roles in Operations Analytics - Log Analysis, see one of the following links:
• V1.3.5: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.5/com.ibm.scala.doc/config/iwa_config_pinstall_userrole_ovw_c.html
• V1.3.3: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.3/com.ibm.scala.doc/config/iwa_config_pinstall_userrole_ovw_c.html
Note: The contents of the Netcool/OMNIbus Insight Pack dashboards are empty unless you log in with a user that has a role assigned with permissions to access the data source.

Modifying the Gateway for Message Bus mapping
You must modify the Gateway for Message Bus mapping to include the new custom field or fields.

Before you begin
The mapping that you configure in this task must match the order of fields configured in your custom data source type, as specified in “Creating a custom data source type” on page 320. You must collect the following information prior to performing this task:
• The location of a Gateway for Message Bus mapping file that is compliant with the current version of the Netcool/OMNIbus Insight Pack.
• The name and location of the Gateway for Message Bus properties file.

Procedure
1. Copy a Gateway for Message Bus mapping file that is compliant with the current version of the Netcool/OMNIbus Insight Pack. For example, for Netcool/OMNIbus Insight Pack V1.3.0.2 and above, you can copy the file $OMNIHOME/gates/xml/scala/xml1302.map. In the following example, the file is copied to a file called xmlCustom1302.map:

cp $OMNIHOME/gates/xml/scala/xml1302.map $OMNIHOME/gates/xml/scala/xmlCustom1302.map

2. Add a comma after the last entry in the CREATE MAPPING StatusMap section of the file. For example, if the last entry is a line specifying the ServerSerial field, then add a comma at the end of that line, like this:

'ServerSerial' = '@ServerSerial',
);

3. For the purposes of this task, we assume that you are adding a new custom field that stores a trouble ticket number, and this field is called TicketNumber. Add this custom field after the last entry in the CREATE MAPPING StatusMap section of the Gateway for Message Bus mapping file, before the terminating parenthesis.

'ServerSerial' = '@ServerSerial',
'TicketNumber' = '@TicketNumber'
);

4. Save the Gateway for Message Bus mapping file, xmlCustom1302.map.
5. Locate the Gateway for Message Bus properties file. By default, this file is called G_SCALA.props and it is located in the following location:

$OMNIHOME/gates/xml/scala/G_SCALA.props

Where $OMNIHOME is /opt/IBM/tivoli/netcool/omnibus/.
6. Edit the Gateway for Message Bus properties file G_SCALA.props using a text editor of your choice; for example, vi.

vi G_SCALA.properties

7. Change the Gate.MapFile parameter to refer to the new mapping file. For example:

Gate.MapFile : '$OMNIHOME/gates/xml/scala/xmlCustom1302.map'

8. Save and close the Gateway for Message Bus properties file G_SCALA.properties. 9. Restart the Gateway for Message Bus.
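For example, a minimal restart sketch, assuming the gateway runs in Operations Analytics - Log Analysis mode and that your properties file is the one edited in this task (adjust the path if your G_SCALA properties file is installed elsewhere, such as $OMNIHOME/etc):

$OMNIHOME/bin/nco_g_xml -propsfile $OMNIHOME/gates/xml/scala/G_SCALA.properties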

Updating the set of custom events
You can update the custom events used by Event Search.

Updating the custom data source type Update the custom data source type that you created earlier to include additional fields. All data sources based on this data source type will automatically update to include the fields that you added.

Before you begin You must have already created a custom data source type, as described in “Creating a custom data source type” on page 320.

About this task The following procedure describes how to add a field to an existing data source type. Restriction: You can add additional fields to the data source type at any time; however, once data has been ingested using a data source based on this type, you cannot modify or delete any of the added fields.

Procedure
1. Go to the docs directory of the Tivoli Netcool/OMNIbus Insight Pack installation; in the example used in this task, this is /home/user/OMNIbusInsightPack_v1.3.1/docs.
2. Edit the omnibus1100_template.properties file using a text editor of your choice, for example, vi:

vi omnibus1100_template.properties

The omnibus1100_template.properties file contains index definitions corresponding to one or more fields to be sent using the data source. For the Netcool/OMNIbus data source all of the event fields must be indexed, so the omnibus1100_template.properties file contains an index entry for each event field to be sent to Operations Analytics - Log Analysis. 3. Add the new custom field to the end of the omnibus1100_template.properties file. The following code snippet shows the beginning of the file and the end of the file.

# Properties file controlling data source specification
# Add new fields at the end of the file
# 'moduleName' specifies the name of the data source.
# Update the version number if you have created a data source with the same name previously and want
# to upgrade it to add an additional field.

# ------
# Insert new fields after this point.
# Number each field sequentially, starting with 'field19'.
# See the IBM Smart Cloud Log Analytics documentation for the DSV Toolkit for an explanation
# of the field values.
# ------
#[field19_indexConfig]
#name:
#dataType: TEXT
#retrievable: true
#retrieveByDefault: true
#sortable: false
#filterable: true

The end of the file includes a commented out section which you can uncomment to add the new field. In that section, replace the name: attribute with the name of the field that you are adding. Here is what the end of file looks like when an index has been added for new custom field TicketNumber:

# ------
# Insert new fields after this point.
# Number each field sequentially, starting with 'field19'.
# See the IBM Smart Cloud Log Analytics documentation for the DSV Toolkit for an explanation
# of the field values.

# ------
[field19_indexConfig]
name: TicketNumber
dataType: TEXT
retrievable: true
retrieveByDefault: true
sortable: false
filterable: true

Note: The order of indexes is important; it must match the order of values specified in the Gateway for Message Bus mapping file. This mapping file will be modified later in the procedure. For more information on the other attributes of the index, see the following Operations Analytics - Log Analysis topics:
• 1.3.5: Editing an index configuration
• 1.3.5: Example properties file with edited index configuration fields
4. In the [DSV] section of the file, increase the value of the version parameter.
5. Save the omnibus1100_template.properties file and exit the file editor.
6. From within the /home/user/OMNIbusInsightPack_v1.3.1/docs directory, run the addIndex.sh script to update the data source type.

addIndex.sh -u
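For example, assuming the example installation path used in step 6 (a sketch; adjust the path to your Insight Pack installation):

cd /home/user/OMNIbusInsightPack_v1.3.1/docs
./addIndex.sh -u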

Restriction: You can add additional fields to the data source type at any time; however, once data has been ingested using a data source based on this type, you cannot modify or delete any of the added fields.

Modifying the Gateway for Message Bus mapping You must modify the Gateway for Message Bus mapping to include the new custom field or fields.

Before you begin The mapping that you configure in this task must match the order of fields configured in your custom data source type, as specified in “Creating a custom data source type” on page 320. You must collect the following information prior to performing this task: • Location of a Gateway for Message Bus mapping file that is compliant with the current version of the Netcool/OMNIbus Insight Pack. • Name and location of the Gateway for Message Bus properties file.

Procedure
1. Copy a Gateway for Message Bus mapping file that is compliant with the current version of the Netcool/OMNIbus Insight Pack. For example, for Netcool/OMNIbus Insight Pack V1.3.0.2 and above, you can copy the file $OMNIHOME/gates/xml/scala/xml1302.map. In the following example, the file is copied to a file called xmlCustom1302.map:
cp $OMNIHOME/gates/xml/scala/xml1302.map $OMNIHOME/gates/xml/scala/xmlCustom1302.map

2. Add a comma after the last entry in the CREATE MAPPING StatusMap section of the file. For example, if the last entry is a line specifying the ServerSerial field, then add a comma at the end of that line, like this:

'ServerSerial' = '@ServerSerial',
);

3. For the purposes of this task, we assume that you are adding a new custom field that stores a trouble ticket number, and this field is called TicketNumber. Add this custom field after the last entry in the CREATE MAPPING StatusMap section of the Gateway for Message Bus mapping file, before the terminating parenthesis.

'ServerSerial' = '@ServerSerial',
'TicketNumber' = '@TicketNumber'
);

4. Save the Gateway for Message Bus mapping file, xmlCustom1302.map. 5. Locate the Gateway for Message Bus properties file. By default this file is called G_SCALA.properties and it is located in the following directory:

$OMNIHOME/gates/xml/scala/

Where $OMNIHOME is /opt/IBM/tivoli/netcool/omnibus/. 6. Edit the Gateway for Message Bus properties file G_SCALA.properties using a text editor of your choice; for example, vi.

vi G_SCALA.properties

7. Change the Gate.MapFile parameter to refer to the new mapping file. For example:

Gate.MapFile : '$OMNIHOME/gates/xml/scala/xmlCustom1302.map'

8. Save and close the Gateway for Message Bus properties file G_SCALA.properties. 9. Restart the Gateway for Message Bus.

Customizing events used in Event Search using the DSV toolkit You must use the DSV toolkit to add events to Event Search if you are running Netcool Operations Insight v1.4.1.1 or lower, and are therefore using Tivoli Netcool/OMNIbus Insight Pack V1.3.0.2.

Before you begin
You can use the DSV toolkit to generate a customized insight pack. The DSV toolkit is provided with the Operations Analytics - Log Analysis product, in $UNITY_HOME/unity_content/DSVToolkit_v1.1.0.4. In the properties file, you can change the index configurations to meet your requirements. A new source type with an updated index configuration is created when you install the insight pack.
An insight pack contains the following elements:
• An index configuration: defines how the fields are indexed in Operations Analytics - Log Analysis.
• A splitter: splits the ingested data into individual log entries.
• An annotator: splits the log entries into fields to be indexed.
The Netcool/OMNIbus insight pack requires the newlineSplitter.aql custom splitter, and the insight pack can use an annotator built using the DSV toolkit. To modify the index configuration, and generate a data source to ingest the data with the new index, you need to create a new insight pack using the DSV toolkit and modify it to use the Netcool Operations Insight newlineSplitter.aql.
• See the DSV toolkit documentation in the $UNITY_HOME/unity_content/DSVToolkit_v1.1.0.4/docs directory for information about specifying field properties and generating insight packs.
• See the Gateway for Message Bus documentation for information about mapping event fields to insight pack properties.
Testing insight packs requires a Gateway for Message Bus to transfer events from Netcool/OMNIbus to Operations Analytics - Log Analysis.

About this task Use the DSV toolkit to generate an insight pack that contains a new rule set (annotator and splitter) for the Netcool/OMNIbus event fields that you want. The procedure describes how to create an insight pack called ITEventsInsightPack_V1.1.0.1, based on the Tivoli Netcool/OMNIbus Insight Pack V1.3.0.2. Use your own naming as appropriate.

Procedure
1. Make a copy of the omnibus1100.properties file, which is in the docs directory of the Tivoli Netcool/OMNIbus Insight Pack installation directory ($UNITY_HOME/unity_content/OMNIbusInsightPack_v1.3.0.2), and rename it. For example, rename it to ITEvents.properties.
2. Copy the ITEvents.properties file that you created in step "1" on page 328 to the DSV toolkit directory $UNITY_HOME/unity_content/DSVToolkit_v1.1.0.4.
3. Edit the ITEvents.properties file. For example, change the default value of the aqlModuleName field to ITEvents, and add, modify, or remove event field properties as required. To obtain the version number V1.1.0.1, change the version property to 1.1.0.1.
4. If you added or removed fields from the file, change the value of the totalColumns field so that it specifies the total number of fields in the file.
5. Use the following command to generate an insight pack:

python dsvGen.py ITEvents.properties -o

The insight pack is named ITEventsInsightPack_V1.1.0.1.
6. Add the customized splitter into the insight pack as follows:
a) Create the following directory for the splitter: $UNITY_HOME/unity_content/DSVToolkit_v1.1.0.4/build/ITEventsInsightPack_v1.1.0.1/extractors/ruleset/splitter
b) Extract the Netcool/OMNIbus insight pack and copy the following file:
Insight Pack Extract Directory/OMNIbusInsightPack_v1.3.0.2/extractors/ruleset/splitter/newlineSplitter.aql
to
$UNITY_HOME/unity_content/DSVToolkit_v1.1.0.4/build/ITEventsInsightPack_v1.1.0.1/extractors/ruleset/splitter/
c) Open the file $UNITY_HOME/unity_content/DSVToolkit_v1.1.0.4/build/ITEventsInsightPack_v1.1.0.1/metadata/filesets.json and remove the following text:
,{"name":"ITEvents-Split","type":0,"fileType":0,"fileName":"Dsv.jar","className":"com.ibm.tivoli.unity.content.insightpack.dsv.extractor.splitter.DsvSplitter"}
d) Open the file $UNITY_HOME/unity_content/DSVToolkit_v1.1.0.4/build/ITEventsInsightPack_v1.1.0.1/metadata/ruleset.json and add the following text:
[{"name":"ITEvents-Split","type":0,"rulesFileDirectory":"extractors\/ruleset\/splitter"}]
e) Open the file $UNITY_HOME/unity_content/DSVToolkit_v1.1.0.4/build/ITEventsInsightPack_v1.1.0.1/metadata/sourcetypes.json and change the following text:
"splitter":{"fileSet":"ITEvents-Split","ruleSet":null,"type":1}
to
"splitter":{"fileSet":null,"ruleSet":"ITEvents-Split","type":1}
f) Go to $UNITY_HOME/unity_content/DSVToolkit_v1.1.0.4/build and compress the contents of the insight pack directory using the zip command utility to create a new insight pack. Ensure that you run the command from the /build directory to preserve the directory structure in the resulting .zip file (in this example, the directory is /ITEventsInsightPack_v1.1.0.1, so the file would be ITEventsInsightPack_v1.1.0.1.zip). For example:

zip -r ITEventsInsightPack_v1.1.0.1.zip ITEventsInsightPack_v1.1.0.1

g) Install the insight pack using the $UNITY_HOME/utilities/pkg_mgmt.sh command as described in "Installing the Tivoli Netcool/OMNIbus Insight Pack" on page 60.
7. Test the insight pack:
a) Create a temporary data source in Operations Analytics - Log Analysis for the new Source Type and Collection created by the DSV toolkit.
b) Change the Gateway for Message Bus map file to match the fields that you defined in the ITEvents.properties file.
Important: The order of the column entries must match exactly the order of the alert field entries in the gateway map file. See the Gateway for Message Bus documentation for information about configuring the map file.
c) In the gateway scalaTransport.properties file, modify the values of the jsonMsgHostname and jsonMsgLogPath properties to match the attributes of the new data source that you created in step "7.a" on page 329.
d) Test the new configuration.
8. Create a new data source called "omnibus" by using the new source type defined in the ITEvents Insight Pack.
Important: You cannot rename an existing data source to the default name omnibus or use an existing data source that is named omnibus. You must delete the existing data source, then create the new data source and name it omnibus.
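For step 6 g), a minimal sketch of the installation command, assuming the .zip file created in step 6 f) and the default utilities path (adjust the paths to your installation):

$UNITY_HOME/utilities/pkg_mgmt.sh -install $UNITY_HOME/unity_content/DSVToolkit_v1.1.0.4/build/ITEventsInsightPack_v1.1.0.1.zip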

What to do next Test the new insight pack in the Operations Analytics - Log Analysis UI.

Customizing the Apps The following procedure describes how to generate customized versions of the Custom Apps provided with the Insight Pack.

Before you begin Consult the Operations Analytics - Log Analysis documentation for information about creating Custom Apps and building Insight Packs with the Eclipse-based Insight Pack Tooling. Testing Insight Packs requires a Gateway for Message Bus to transfer events from Netcool/OMNIbus to Operations Analytics - Log Analysis.

About this task The procedure is divided into two main parts: 1. Create a new Custom App, based on the Custom App provided with the Insight Pack. 2. Use the Operations Analytics - Log Analysis Eclipse-based Insight Pack Tooling to build a new Insight Pack that contains the new Custom App.

Procedure
Create a new Custom App
1. Copy all the files in the $UNITY_HOME/AppFramework/Apps/OMNIbusInsightPack_version_number directory to a new directory under $UNITY_HOME/AppFramework/Apps/. For example, copy them to $UNITY_HOME/AppFramework/Apps/ITEvents.
2. In the $UNITY_HOME/AppFramework/Apps/ITEvents directory, rename all the App files from OMNIbus_*.app to NewName_*.app.

For example, rename OMNIbus_Event_Distribution.app to ITEvents_Distribution.app.
3. Modify ITEvents_Distribution.app as required. For example customizations, see "Example customization: OMNIbus Static Dashboard" on page 330.
4. Test the new Custom App.
Build a new Insight Pack with the Eclipse Tooling
5. Import the Netcool/OMNIbus Insight Pack, $UNITY_HOME/unity_content/OMNIbusInsightPack_version_number.zip, into the Insight Pack Tooling.
6. Use the Refactor > Rename command to create a new name for the Insight Pack, for example, ITEvents.
7. Import your Custom App files from the $UNITY_HOME/AppFramework/Apps/ITEvents directory to the src-files/unity_apps/apps Tooling directory.
8. When the files are successfully imported, remove the $UNITY_HOME/AppFramework/Apps/ITEvents directory.
9. Build the Insight Pack. The new Insight Pack is named ITEventsInsightPack_V1.1.0.1.zip and is located in the dist Tooling directory.
10. Use the following command to install the new Insight Pack:
pkg_mgmt.sh -install directory_path/ITEventsInsightPack_V1.1.0.1.zip
Where directory_path is the directory path to the dist Tooling directory.

What to do next Test the new Insight Pack and Custom App.

Example customization: OMNIbus Static Dashboard The following examples show you how to customize the OMNIbus Static Dashboard app. Note: Do not directly modify the custom apps supplied with the Insight Pack. Before you implement any of the following examples, create a custom app as described in “Customizing the Apps” on page 329. • “Removing a chart” on page 330 • “Adding a new chart” on page 331

Removing a chart To remove a chart from the event distribution dashboard, edit your custom .app file (for example, ITEvents_Distribution.app) and remove the corresponding JSON element. For example, to remove the Hotspot by Node and AlertGroup chart from the dashboard, remove the following element from the charts array:

{ "type": "Heat Map", "title": "Hotspot by Node and AlertGroup", "data": { "$ref": "AlertGroupVsNode" }, "parameters": { "yaxis": "AlertGroup", "xaxis": "Node", "category": "count" } },

When you run the App, the Hotspot by Node and AlertGroup chart is no longer available in the event dashboard.

Adding a new chart
Use the following steps to add a new heat map that shows Hotspot by Node and Location:
1. Open your custom copy of the OMNIbus_Static_Dashboard.py file for editing. In the Create a new Custom App procedure described in "Customizing the Apps" on page 329, this file is in the $UNITY_HOME/AppFramework/Apps/ITEvents example directory.
2. To generate the search data, add the last line shown in the following example to the file:

dashboardObj.getSingleFieldCount(chartdata, querystring, 'Node', '5');
dashboardObj.getSingleFieldCount(chartdata, querystring, 'AlertGroup', '5');
dashboardObj.getSingleFieldCount(chartdata, querystring, 'Severity', '5');
dashboardObj.getSingleFieldVsTimestamp(chartdata, querystring, 'AlertGroup', '1');
dashboardObj.getSingleFieldVsTimestamp(chartdata, querystring, 'Severity', '10');
dashboardObj.getSingleFieldVsTimestamp(chartdata, querystring, 'Node', '1');
dashboardObj.getFieldaVsFieldb(chartdata, querystring, 'AlertGroup', 'Node', '5', '5');
dashboardObj.getFieldaVsFieldb(chartdata, querystring, 'Severity', 'AlertGroup', '5', '5');
dashboardObj.getFieldaVsFieldb(chartdata, querystring, 'Node', 'Location', '5', '5');

3. Add the following JSON element to the end of the charts array in your custom .app file (for example, ITEvents_Distribution.app):

{ "type": "Heat Map", "title": "Hotspot by Node and Location", "data": { "$ref": "NodeVsLocation" }, "parameters": { "yaxis": "Node", "xaxis": "Location", "category": "count" } },

The following table lists the values allowed for each JSON object.

type
Specifies the chart type. Depending on the functions that you add to OMNIbus_Static_Dashboard.py, the following chart types are available:
• The getSingleFieldCount function enables the following types: Line Chart, Pie Chart, Simple Bar Chart, Point Chart.
• The getSingleFieldVsTimeStamp function enables the following types: Bubble Chart, Bar Chart, Two Series Line Chart, Stacked Bar Chart, Heat Map, Cluster Bar, Stacked Line Chart.
• The getFieldaVsFieldb function enables the following types: Bubble Chart, Bar Chart, Two Series Line Chart, Stacked Bar Chart, Heat Map, Cluster Bar, Stacked Line Chart.

title
You can specify a title of your choice.

$ref
Specifies the data reference. Depending on the functions that you add to OMNIbus_Static_Dashboard.py, the following data reference values are available:
• The getSingleFieldCount function enables the following value: FieldNameCount
• The getSingleFieldVsTimeStamp function enables the following value: FieldNameVsTime
• The getFieldaVsFieldb function enables the following value: FieldNameAVsFieldNameB
Where FieldName is case-sensitive and matches the event field name in the index configuration of your source type.

parameters
Specifies the yaxis, xaxis, and category chart parameters. Depending on the functions that you add to OMNIbus_Static_Dashboard.py, the following chart parameter values are available:
• The getSingleFieldCount function enables the following values: FieldName, count.
• The getSingleFieldVsTimeStamp function enables the following values: FieldName, date, count.
• The getFieldaVsFieldb function enables the following values: FieldNameA, FieldNameB, count.
Where FieldName is case-sensitive and matches the event field name in the index configuration of your source type.

When you run the App, the new Hotspot by Node and Location chart is displayed in the event dashboard.

Customizing dynamic dashboards
You can create new dynamic dashboards, or customize the OMNIbus Dynamic Dashboard and OMNIbus_Operational_Efficiency apps, and the apps in the Event_Analysis_And_Reduction wizard. You can change the data source and time range filters of these apps. For more information about creating new dynamic dashboards and customizing the apps, see the following topics, depending on your version of Operations Analytics - Log Analysis:
• V1.3.5: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.5/com.ibm.scala.doc/use/scla_extend_create_dboard_t.html
• V1.3.3: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.3/com.ibm.scala.doc/use/scla_extend_create_dboard_t.html
Important: Do not directly modify the custom apps that are supplied with the Insight Pack. Before you customize an app, create a custom app as described in "Customizing the Apps" on page 329. To avoid inconsistent results, change the data source or the time range filter for each chart in your custom .app file.

Configuring Topology Search The topology search capability is an extension of the Networks for Operations Insight feature. It applies the search and analysis capabilities of Operations Analytics - Log Analysis to give insight into network performance. Events that have been enriched with network data are analyzed by the Network Manager Insight Pack and are used to calculate the lowest-cost routes between two endpoints on the network topology over time. The events that occurred along the routes over the specified time period are identified

and shown by severity. The topology search requires the Networks for Operations Insight feature to be installed and configured.
The topology search capability can plot the lowest-cost route across a network between two end points and display all the events that occur on the devices on the routes. The Network Manager IP Edition product enriches all the event data that is generated by the devices on the network topology. It is stored in Tivoli Netcool/OMNIbus, so that the Operations Analytics - Log Analysis product can cross-reference devices and events. The Gateway for Message Bus is used to pass event data from Tivoli Netcool/OMNIbus to Operations Analytics - Log Analysis. Also, the Network Manager Insight Pack reads topology data from the NCIM database in Network Manager IP Edition to identify the paths in the topology between the devices.
The scope of the topology search capability is that of the entire network topology, which includes all NCIM domains. To restrict the topology search to a single domain, you can configure a properties file that is included in the Network Manager Insight Pack.
After the Insight Pack is installed, you can run the apps from the Operations Analytics - Log Analysis UI. With Network Manager IP Edition installed and configured, the apps can also be run as right-click tools from the Network Views. With Tivoli Netcool/OMNIbus Web GUI installed and configured, the apps can be run as right-click tools from the Event Viewer and Active Event List (AEL).
The custom apps use the network-enriched event data and the topology data from the Network Manager IP Edition NCIM database. They plot the lowest-cost routes across the network between two nodes (that is, network entities) and count the events that occurred on the nodes along the routes. You can specify different time periods for the route and events. The algorithm uses the speed of the interfaces along the routes to calculate the routes that are lowest-cost: that is, the fastest routes from start to end along which a packet can be sent.
The network topology is based on the most recent discovery. Historical routes are not accounted for. If your network topology is changeable, the routes between the nodes can change over time. If the network is stable, the routes stay current.

Before you begin
Ensure that you have a good knowledge of your network before you implement the topology search capability. Over large network topologies, the topology search can be performance intensive. It is therefore important to determine which parts of your network you want to use the topology search on. You can define those parts of the network into a single domain. Alternatively, implement the cross-domain discovery function in Network Manager IP Edition to create a single aggregation domain of the domains that you want to search. You can restrict the scope of the topology search to that domain or aggregation domain. For more information about deploying Network Manager IP Edition to monitor small, medium, and large networks, see https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/install/concept/ovr_deploymentseg.html. For more information about the cross-domain discovery function, see https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/disco/task/dsc_configuringcrossdomaindiscoveries.html.

Supported products and components The topology search capability is supported on a specific combination of products and components. Ensure that your environment has the requisite support before you enable topology search. These requirements apply to both new and upgraded environments. The topology search capability requires the following products and components. • Operations Analytics - Log Analysis V1.3 or later with the OMNIbusInsightPack_v1.3.1 and the NetworkManagerInsightPack_V1.3.0.0. • Tivoli Netcool/OMNIbus Core V8.1.0.2 and Tivoli Netcool/OMNIbus Web GUI V8.1.0.2 or later. Install the Install tools and menus for event search with IBM SmartCloud Analytics - Log Analysis feature as part of the Web GUI installation. • Gateway for Message Bus package version 6.0 or later. Earlier package versions do not include the configurations that are required for the topology search capability. • Network Manager V4.1.1.1 or later. The topology search capability requires that the NCIM database for the network topology is IBM Db2 9.7 or 10.1. Oracle 10g or 11g is also supported, but requires more

configuration than Db2. Although the Network Manager product supports other databases for storing the topology, the topology search capability is supported only on these databases.

Network Manager Insight Pack The Network Manager Insight Pack reads event data and network topology data so that it can be searched and visualized in the IBM Operations Analytics - Log Analysis product. The Network Manager IP Edition product enriches all the event data that is generated by the devices on the network topology. It is stored in Tivoli Netcool/OMNIbus, so that the Operations Analytics - Log Analysis product can cross-reference devices and events. The Gateway for Message Bus is used to pass event data from Tivoli Netcool/OMNIbus to Operations Analytics - Log Analysis. Also, the Network Manager Insight Pack reads topology data from the NCIM database in Network Manager IP Edition to identify the paths in the topology between the devices. The scope of the topology search capability is that of the entire topology network, which includes all NCIM domains. To restrict the topology search to a single domain, you can configure a properties file that is included in the Network Manager Insight Pack.

Content of the Insight Pack
The following data ingestion artifacts are included in the Network Manager Insight Pack:
• Custom apps, which are described in Table 53 on page 335.
A rule set, source type, and collection are provided in the OMNIbusInsightPack_v1.3.1, which the Network Manager Insight Pack uses.

Custom apps
The following table describes the custom apps in the Insight Pack.
After the Insight Pack is installed, you can run the apps from the Operations Analytics - Log Analysis UI. With Network Manager IP Edition installed and configured, the apps can also be run as right-click tools from the Network Views. With Tivoli Netcool/OMNIbus Web GUI installed and configured, the apps can be run as right-click tools from the Event Viewer and Active Event List (AEL).
The custom apps use the network-enriched event data and the topology data from the Network Manager IP Edition NCIM database. They plot the lowest-cost routes across the network between two nodes (that is, network entities) and count the events that occurred on the nodes along the routes. You can specify different time periods for the route and events. The algorithm uses the speed of the interfaces along the routes to calculate the routes that are lowest-cost: that is, the fastest routes from start to end along which a packet can be sent.
The network topology is based on the most recent discovery. Historical routes are not accounted for. If your network topology is changeable, the routes between the nodes can change over time. If the network is stable, the routes stay current.
The apps count the events that occurred over predefined periods of time, relative to the current time, or over a custom time period that you can specify. For the predefined time periods, the current time is calculated differently, depending on which product you run the apps from. Network Manager IP Edition uses the current time stamp. The Tivoli Netcool/OMNIbus Web GUI uses the time that is specified in the FirstOccurrence field of the events.

Table 53. Custom apps that are included in the Network Manager Insight Pack

Find alerts between two nodes on layer 2 topology (NM_Show_Alerts_Between_Two_Nodes_Layer2.app)
This app shows the distribution of alerts on the least-cost routes between two network end points in a layer 2 topology. Charts show the alert distribution by severity and alert group for each route over the specified time period. The ObjectServer field for the alert group is AlertGroup. A list of the routes is displayed from which you can search the events that occurred on each route over the specified time period.
In the Operations Analytics - Log Analysis UI, the app requires search results before you can run it. In the search results, select the NmosObjInst column and then run the app. The app finds the events between the 2 nodes on which each selected event originated.

Find alerts between two nodes on layer 3 topology (NM_Show_Alerts_Between_Two_Nodes_Layer3.app)
This app shows the distribution of alerts on the least-cost routes between two network end points in a layer 3 topology. Charts show the alert distribution by severity and alert group for each route over the specified time period. The ObjectServer field for the alert group is AlertGroup. A list of the routes is displayed from which you can search the events that occurred on each route over the specified time period.
In the Operations Analytics - Log Analysis UI, the app requires search results before you can run it. In the search results, select the NmosObjInst column and then run the app. The app finds the events between the 2 nodes on which each selected event originated.

Configuring topology search Before you can use the topology search capability, configure the Tivoli Netcool/OMNIbus core and Web GUI components, the Gateway for Message Bus and Network Manager IP Edition.

Before you begin
Set up the environment for each product as follows:
• Configure the event search capability, including the Gateway for Message Bus. See "Configuring integration with Operations Analytics - Log Analysis" on page 302. The topology search capability requires that the Gateway for Message Bus is configured to forward event data to Operations Analytics - Log Analysis.
• If your Operations Analytics - Log Analysis is upgraded from a previous version, migrate the data to your V1.3 instance. See one of the following topics:
– Operations Analytics - Log Analysis V1.3.5: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.5/com.ibm.scala.doc/admin/iwa_admin_backup_restore.html
– Operations Analytics - Log Analysis V1.3.3: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.3/com.ibm.scala.doc/admin/iwa_admin_backup_restore.html
• Ensure that the ObjectServer that forwards event data to Operations Analytics - Log Analysis has the NmosObjInst column in the alerts.status table. NmosObjInst is supplied by default and is required for

this configuration. You can use ObjectServer SQL commands to check for the column and to add it if it is missing, as follows (see the sketch after this list):
– Use the DESCRIBE command to read the columns of the alerts.status table.
– Use the ADD COLUMN setting with the ALTER TABLE command to add NmosObjInst to the alerts.status table.
For more information about the alerts.status table, including the NmosObjInst column, see https://ibm.biz/BdXcBF. For more information about ObjectServer SQL commands, see https://ibm.biz/BdXcBX.
• Configure the Tivoli Netcool/OMNIbus Web GUI V8.1.0.4 as follows:
– Install the Netcool Operations Insight Extensions for IBM Tivoli Netcool/OMNIbus Web GUI package. IBM Installation Manager installs this package separately from the Web GUI. It needs to be explicitly selected.
– Check the server.init file to ensure that the scala* properties are set as follows:

scala.app.keyword=OMNIbus_Keyword_Search
scala.app.static.dashboard=OMNIbus_Static_Dashboard
scala.datasource=omnibus
scala.url=protocol://host:port
scala.version=1.2.0.3

This configuration is needed for new environments and for environments that are upgraded from versions of Operations Analytics - Log Analysis that are earlier than 1.2.0.3.
– Set up the Web GUI Administration API client, which is needed to install the event list tooling that launches Operations Analytics - Log Analysis. See http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/webtop/wip/task/web_con_setwaapiuserandpw.html.
• Install and configure the Insight Packs as follows:
1. Install the OMNIbusInsightPack_v1.3.1. If your environment is upgraded from a previous version of Netcool Operations Insight, upgrade to this version of the Insight Pack. See "Netcool/OMNIbus Insight Pack" on page 311.
2. Create a data source.
3. Obtain and install the Network Manager Insight Pack V1.3.0.0. See "Installing the Network Manager Insight Pack" on page 89.
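The following is a minimal sketch of the DESCRIBE and ALTER TABLE check described in the list above, run through the nco_sql utility (the server name and credentials are illustrative; the integer type matches the default NmosObjInst column definition):

./nco_sql -user root -password myp4ss -server NCOMS
describe alerts.status;
go
alter table alerts.status add column NmosObjInst integer;
go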

Procedure 1. In $NCHOME/omnibus/extensions, run the nco_sql utility against the scala_itnm_configuration.sql file.

./nco_sql -user root -password myp4ss -server NCOMS < /opt/IBM/tivoli/netcool/omnibus/extensions/scala/scala_itnm_configuration.sql

Triggers are applied to the ObjectServer that delay the storage of events until the events are enriched by Network Manager IP Edition data from the NCIM database. 2. If the Gateway for Message Bus is not configured to forward event data to Operations Analytics - Log Analysis, perform the required configurations.4

4 At a high-level, this involves the following: • Creating a gateway server in the Netcool/OMNIbus interfaces file • Configuring the G_SCALA.props properties file, including specifying the xml1302.map • Configuring the endpoint in the scalaTransformers.xml file • Configuring the SSL connection, if required • Configuring the transport properties in the scalaTransport.properties file

3. Install the tools and menus to launch the custom apps of the Network Manager Insight Pack in the Operations Analytics - Log Analysis UI from the Web GUI. In $WEBGUI_HOME/extensions/LogAnalytics, run the runwaapi command against the scalaEventTopology.xml file.

$WEBGUI_HOME/waapi/bin/runwaapi -user username -password password -file scalaEventTopology.xml

Where username and password are the credentials of the administrator user that are defined in the $WEBGUI_HOME/waapi/etc/waapi.init properties file that controls the WAAPI client. 4. On the host where the Network Manager GUI components are installed, install the tools and menus to launch the custom apps of the Network Manager Insight Pack in the Operations Analytics - Log Analysis GUI from the Network Views. a) In $NMGUI_HOME/profile/etc/tnm/topoviz.properties, set the topoviz.unity.customappsui property, which defines the connection to Operations Analytics - Log Analysis. For example:

# Defines the LogAnalytics custom App launcher URL
topoviz.unity.customappsui=https://server3:9987/Unity/CustomAppsUI

b) In the $NMGUI_HOME/profile/etc/tnm/menus/ncp_topoviz_device_menu.xml file, define the Event Search menu item by adding the item to the file.

5. Start the Gateway for Message Bus in Operations Analytics - Log Analysis mode. For example:

$OMNIHOME/bin/nco_g_xml -propsfile $OMNIHOME/etc/G_SCALA.props

The gateway begins sending events from Tivoli Netcool/OMNIbus to Operations Analytics - Log Analysis.

What to do next
• Configure single sign-on (SSO) between the products.
• Reconfigure your views in the Web GUI to display the NmosObjInst column. The tools that launch the custom apps of the Network Manager Insight Pack work only against events that have a value in this column. See http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/webtop/wip/task/web_cust_settingupviews.html.
Related tasks
Installing the Network Manager Insight Pack
This topic explains how to install the Network Manager Insight Pack into the Operations Analytics - Log Analysis product and make the necessary configurations. The Network Manager Insight Pack is required only if you deploy the Networks for Operations Insight feature and want to use the topology search capability. For more information, see "Network Manager Insight Pack" on page 334. Operations Analytics - Log Analysis can be running while you install the Insight Pack.

Configuring single sign-on for the topology search capability Configure single sign-on (SSO) between the Dashboard Application Services Hub that hosts the Network Manager IP Edition GUI components and Operations Analytics - Log Analysis so that users can switch between the two products without having to log in each time. First, create dedicated users in your LDAP directory, which must be used by both products for user authentication, and then configure the SSO connection.

Procedure
1. Create the dedicated users and groups in your LDAP directory. For example:
a. Create a new Organization Unit (OU) named NetworkManagement.
b. Under the NetworkManagement OU, create a new group named itnmldap.
c. Under the NetworkManagement OU, create the following new users: itnm1, itnm2, itnm3, and itnm4.
d. Add the new users to the itnmldap group.
2. In Dashboard Application Services Hub, assign the itnmldap group that you created in step "1" on page 338 to a Network Manager IP Edition user group that can access the Network Views. Network Manager IP Edition user roles are controlled by assignments to user groups. Possible user groups that can access the Network Views are Network_Manager_IP_Admin and Network_Manager_User.
3. Configure the SSO connection from the Operations Analytics - Log Analysis product to the Dashboard Application Services Hub instance in which Network Manager IP Edition is hosted. For more information about configuring SSO for Operations Analytics - Log Analysis, see the Operations Analytics - Log Analysis documentation. The following steps of the Operations Analytics - Log Analysis SSO configuration are important:
• Assign Operations Analytics - Log Analysis roles to the users and groups that you created in step "1" on page 338.
• In the $SCALAHOME/wlp/usr/servers/Unity/server.xml file, ensure that the element has an httpOnlyCookies="false" attribute. Add this line before the closing </server> element.
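For example, a reconstruction that assumes the WebSphere Liberty webAppSecurity element, which carries the httpOnlyCookies attribute:

<webAppSecurity httpOnlyCookies="false"/>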

The httpOnlyCookies="false" attribute disables the httponly flag on the cookie that is generated by Operations Analytics - Log Analysis and is required to enable SSO with Network Manager IP Edition GUI.

Configuring Topology Search You can configure and customize the Network Manager Insight Pack to meet your requirements. Related concepts Network Management tasks Use this information to understand the tasks that users can perform using Network Management.

Verifying that the Insight Pack was installed
Run the pkg_mgmt command to verify that the Insight Pack was installed, or check the Operations Analytics - Log Analysis UI.

Procedure
• Run the pkg_mgmt.sh script as follows:

$SCALA_HOME/utilities/pkg_mgmt.sh -list

Search the results for a line similar to the following example. In this example, V1.3.0.0 is installed.
[packagemanager] NetworkManagerInsightPack_V1.3.0.0 /home/myhome/IBM/LogAnalysis/unity_content/NetworkManager
• On the Operations Analytics - Log Analysis UI, check for the following items in the Search Dashboards area: NetworkManagerInsightPack > Find events between two nodes on layer 3 topology or NetworkManagerInsightPack > Find events between two nodes on layer 2 topology

Event annotations
The event annotations that are defined by the Network Manager Insight Pack index configuration file are the same as those in the Tivoli Netcool/OMNIbus Insight Pack. For more information, see "Netcool/OMNIbus Insight Pack" on page 311.

Configuring topology search apps for use with Oracle databases If you are using an Oracle database for topology storage (that is, as the NCIM database), obtain the Oracle driver and point to the driver in the custom apps. You can skip this task if you are using a Db2 database. You can make this configuration while the products and Insight Pack are still running.

Before you begin
Install and configure the supported products, and install and configure the Network Manager Insight Pack.

Procedure
1. Obtain and install the Oracle driver from the Oracle website.
2. On the Operations Analytics - Log Analysis host, save the driver to $UNITY_HOME/wlp/usr/servers/Unity/apps/Unity.war/WEB-INF/lib.
3. Point the custom apps to the driver:
a) Open the following files for editing:
• $UNITY_HOME/AppFramework/Apps/NetworkManagerInsightPack_v1.3.0.0/Network_Topology_Search/NM_Show_Alerts_Between_Two_Nodes_Layer2.sh
• $UNITY_HOME/AppFramework/Apps/NetworkManagerInsightPack_v1.3.0.0/Network_Topology_Search/NM_Show_Alerts_Between_Two_Nodes_Layer3.sh
b) Insert the path to the Oracle driver into the classpath parameter. For example, change it from this classpath:

$UNITY_LIB_PATH/db2jcc.jar:$UNITY_LIB_PATH/log4j-1.2.16.jar

To this classpath:

$UNITY_LIB_PATH/db2jcc.jar:$UNITY_LIB_PATH/ojdbc14.jar:$UNITY_LIB_PATH/log4j-1.2.16.jar

4. In the $UNITY_HOME/AppFramework/Apps/NetworkManagerInsightPack_V1.3.0.0/Network_Topology_Search/NM_EndToEndSearch.properties file, which points the Insight Pack to the NCIM database, ensure that the following properties for Oracle are correctly set.
ncp.dla.datasource.type
Enter oracle.
ncp.dla.datasource.driver
Enter the driver name, for example oracle.jdbc.driver.OracleDriver.
ncp.dla.datasource.url
Enter the URL, in the format jdbc:oracle:thin:@host:port:name, where name is the name of the database. For example, jdbc:oracle:thin:@192.168.1.2:1521:itnm
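Taken together, a sketch of the corresponding entries in NM_EndToEndSearch.properties, assuming the usual name=value properties syntax (the host, port, and database name are illustrative):

ncp.dla.datasource.type=oracle
ncp.dla.datasource.driver=oracle.jdbc.driver.OracleDriver
ncp.dla.datasource.url=jdbc:oracle:thin:@192.168.1.2:1521:itnm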

Customizing the index configuration Because the rule set, source type, and collection are provided in the OMNIbusInsightPack_v1.3.1, any customizations must be made to that Insight Pack, not the Network Manager Insight Pack. Refer to the OMNIbusInsightPack_v1.3.1 README for more information about customizing that Insight Pack. For more information, see “Netcool/OMNIbus Insight Pack” on page 311.

Configuring integration to IBM Connections IBM Tivoli Netcool/Impact provides integration to IBM Connections by using a Netcool/Impact IBMConnections action function. The IBMConnections action function allows users to query forums and topics lists, create a new forum, create a new topic, and update existing topics. The

IBMConnections action function package is available in the directory $IMPACT_HOME/integrations/IBMConnections.

About this task Complete the following steps to configure Netcool/Impact to integrate with the IBM Connections Server.

Procedure
1. Go to the directory $IMPACT_HOME/integrations/IBMConnections. This directory contains the IBM Connections integration package. The package includes the following subdirectory:
importData
A project that includes policies, a data source, a data type, and a service. The project serves as an example that shows how to connect to IBM Connections and how to create and update topics.
2. Import the project: $IMPACT_HOME/bin/nci_import __Extraction_Directory__/importData
3. Within the $IMPACT_HOME/etc/servername_server.props file (where servername is the name of your Impact server), add the following parameters:
impact.ibmconnections.forum.title.maxsize=number
Default value is 0. Any string size can be used.
impact.ibmconnections.forum.content.maxsize=number
Default value is 0. Any string size can be used.
impact.ibmconnections.topic.title.maxsize=number
Default value is 255, for a topic and a reply.
impact.ibmconnections.topic.content.maxsize=number
Default value is 0. Any string size can be used.
4. Restart the Netcool/Impact servers.
Related tasks
Installing Netcool/OMNIbus and Netcool/Impact

IBM Connections Overview
IBM Connections is a leading social software platform that can help your organization to engage the right people, accelerate innovation, and deliver results. This integrated, security-rich platform helps people engage with networks of experts in the context of critical business processes. Now everyone can act with confidence and anticipate and respond to emerging opportunities. For information about IBM Connections, refer to http://www-03.ibm.com/software/products/en/conn

Parameters for the IBMConnections function The integration between Netcool/Impact and IBM Connections uses a new policy action function that is called the IBMConnections function. The IBMConnections function can be used within any policy to connect to the IBM Connections Server and to perform an action on the IBM Connections Server. The function accepts two input parameters, the Action Option parameter and the Impact Object parameter.

Action Option Parameter
The Action Option parameter accepts one of the following action entries, which are case-insensitive. Some action entries require property information that is case-sensitive.
In the content of an IBM Connections forum, topic, or reply, you can use the HTML formatting tags br, b, and a. For more information about the supported HTML tags, see http://www-03.ibm.com/software/products/en/conn.

CREATEFORUM
Creates a forum. Enter the following property information, which is case-sensitive. The tags must be created before they are passed to a variable name.
props.ForumTitle=title;
props.ForumContent=full text of the body;
props.ForumTags=List_Of_Tags; (optional; the object must be a Netcool/Impact object)
Tags=NewObject();
Tags.Tag1=some tag;
Tags.Tag2=some tag2; (optional, if you want more than one tag)
CREATETOPIC
Creates a topic. Enter the following property information, which is case-sensitive:
props.TopicTitle=title;
props.TopicContent=full text of the body;
props.ForumId=forum id; (where the forum id is an ID and not a forum name)
DELETEFORUM
Deletes the forum name that was created by the logged-in user and any topic or reply belonging to it. Enter the following property information, which is case-sensitive:
props.ForumId=forumId; or props.ForumId=forumTitle;
props.FirstMatchOnly=true; or props.FirstMatchOnly=false;
The props.FirstMatchOnly property deletes the first matching forum or matching topic that it finds, or else it deletes any matching forum or matching topic. Its default value is true.
DELETEPUBLICFORUM
Deletes the given public forum name and any topic or reply belonging to it. Enter the following property information, which is case-sensitive:
props.ForumId=forumId; or props.ForumId=forumTitle;
props.FirstMatchOnly=true; or props.FirstMatchOnly=false;
The props.FirstMatchOnly property deletes the first matching forum or matching topic that it finds, or else it deletes any matching forum or matching topic. Its default value is true.

DELETEREPLY
Deletes a topic reply in the topic in the forum id. Enter the following property information, which is case-sensitive:
props.ForumId=forumId; (where the forumId is an ID and not a title)
props.TopicTitle=title; or props.TopicTitle=id;
props.ReplyTitle=replytitle; or props.ReplyTitle=replyid;
props.FirstMatchOnly=true; or props.FirstMatchOnly=false;
The props.FirstMatchOnly property deletes the first matching forum or matching topic that it finds, or else it deletes any matching forum or matching topic. Its default value is true.
DELETETOPIC
Deletes a topic in the forum id. Enter the following property information, which is case-sensitive:
props.ForumId=forumId; (where the forumId is an ID and not a title)
props.TopicTitle=title; or props.TopicTitle=id;
props.FirstMatchOnly=true; or props.FirstMatchOnly=false;
The props.FirstMatchOnly property deletes the first matching forum or matching topic that it finds, or else it deletes any matching forum or matching topic. Its default value is true.

GETCOMMUNITYFORUMID
Gets the ID of the community that is created by the logged-in user. Enter the following property information, which is case-sensitive:
props.CommunityName=Community name;
props.ForumName=forum name;
GETFORUMTOPICS
Gets the list of topics for the forum id. Enter the following property information, which is case-sensitive:
props.ForumId=forum id;
GETMYCOMMUNITYID
Gets the ID of the community for the logged-in user. Enter the following property information, which is case-sensitive:
props.CommunityName=Community name;
GETMYFORUMID
Gets the ID of the forum that is created by the logged-in user. Enter the following property information, which is case-sensitive:
props.ForumName=actual forum name that is created by the logged-in user;
GETMYFORUMS
Gets all the forums for the logged-in user.
GETPUBLICCOMMUNITYID
Gets the ID of the given public community. Enter the following property information, which is case-sensitive:
props.CommunityName=Community name;
GETPUBLICFORUMID
Gets the ID of the given public forum name. Enter the following property information, which is case-sensitive:
props.ForumName=actual public forum name;
GETPUBLICFORUMS
Gets the list of all public forums.
GETTOPICREPLIES
Gets the list of replies for a topic. Enter the following property information, which is case-sensitive:
props.TopicId=topic id; (the topic id must be the topic ID, not the topic name)
REPLYTOTOPIC
Creates a reply to an existing topic. Enter the following property information, which is case-sensitive:
props.TopicId=topic id; (the topic id is a topic ID, not a topic name)
props.ReplyTitle=title;
props.ReplyContent=full text of the body;

Impact Object Parameter
The Impact Object parameter accepts the following property information. The authentication and connection property information is mandatory.
props = NewObject();
props.Protocol=https;

props.Host=IBM Connections Server Host/IP;
props.Port=_PORT_;
props.Username=userName;
props.Password=password;
The password can be encrypted by using either the Netcool/Impact nci_crypt tool or the policy function Encrypt(). If the password is encrypted, you must use the property props.DecryptPassword=true;
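Putting the action option and the Impact object together, the following is a minimal policy sketch. It assumes that the IBMConnections function is called with the two parameters described above; the host, credentials, and topic ID are illustrative, and string values are quoted as IPL requires:

props = NewObject();
props.Protocol = "https";
props.Host = "connections.example.com";
props.Port = 443;
props.Username = "noiuser";
props.Password = "password";
props.TopicId = "12345";
props.ReplyTitle = "Automated update";
props.ReplyContent = "Reply posted by a Netcool/Impact policy.";
IBMConnections("REPLYTOTOPIC", props);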

IBMConnections Project and artifacts
The imported IBMConnections project includes artifacts in four categories.

Data sources
• IBMConnectionsObjectServerDSA
• Internal

Data types
• TopicCreationTracker – An internal data type that is used to track the topic creation to avoid duplicate names.
• InternetOutageEvents – An ObjectServer data type that can be used to view the critical events in the UI Data Provider widgets. It populates the severity as a status data type to show colorful images.

Policies
• IBMConnectionsUtils – Includes a utility function to extract a value from a data item.
• IBMConnectionsUtilsCaller – Shows how to call the utility function in IBMConnectionsUtils.
• IBMConnectionsUtilsJS – For JavaScript policies.
• IBMConnectionsUtilsCallerJS – For JavaScript policies.
• NetworkMonitorExample – Example policy that is run by an event reader to create and update topics.
• NetworkMonitorForOpView – Example policy that is run by the operator view from the ObjectServer Event List tool.
• Opview_IBMConnectionsOpView – Is run by the AEL tool.

Services
• NetworkMonitorExample – Connects to the ObjectServer data source and uses a default filter of Node in ('US','France','UK') and Identifier like 'Monitoring Network for'. You can change the default filter at any time. It runs the policy NetworkMonitorExample.

Automatic topic management
The IBM Connections integration package includes the NetworkMonitorExample event reader service, which connects to the Netcool/OMNIbus ObjectServer and filters for specific events. When there is a match, the service runs the policy, which either updates the topic by sending a reply to the topic, or creates a topic if one does not exist. The forum that is used in this example has the same name as the AlertKey in the ObjectServer event:
forumName = @IBMConnections_Forum
You can use the SQL scripts in the $IMPACT_HOME/integrations/IBMConnections/db directory to create extra fields in the ObjectServer, or you can use existing fields. The topic title is created as a combination of a hardcoded string and the node field from the event:

topicTitleVar = "Network Monitor is down on node: @Node@";
IBMConnectionsUtils.extractParametersAndSubstitute(topicTitleVar, EventContainer, result);
topicTitle = result;

The policy checks if the topic was created by querying the internal data type TopicCreationTracker. If the topic exists, the policy sends a reply instead of creating a new one.
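For illustration only, a hand-run ObjectServer SQL equivalent of the supplied scripts that create the extra forum field referenced above (the column type and size are assumptions; the shipped scripts in the db directory are authoritative):

alter table alerts.status add column IBMConnections_Forum varchar(64);
go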

Automatic topic management with event management tools The operator view policy is updated to run the NetworkMonitorForOpView policy that automatically updates an existing topic or creates a new topic.

Procedure
1. Create the event management tool; refer to the Netcool/OMNIbus documentation.
2. In the event management tool, select the executable box tab and enter the following text:
start "" "https://host:port/opview/displays/NCICLUSTER-IBMConnectionsOpView.html?Node=@Node&Serial=@Serial&Severity=@Severity&Acknowledged=@Acknowledged&AlertKey=@AlertKey&AlertGroup=@AlertGroup&Summary=@Summary"
Where host and port identify the server that hosts the operator view. NCICLUSTER is the default cluster; if you are using a different cluster name, update the URL with your cluster name. The above text is an example of the text to enter for an ObjectServer on Windows.
3. Add the new event management tool to the AlertsStatus tools menu; refer to the Netcool/OMNIbus documentation.
4. When you right-click an event, click the tool to start the URL and run the policy. The operator view is started and gives a notification that the topic is created or updated, along with a link to the topic URL to use.
Related information
Creating event management tools

Enabling historical events Create a connection to the historical database in Impact to view historical events in the Event Viewer.

Procedure
1. Log in to Dashboard Application Services Hub and select the Data Model tab.
2. Click the New Data Source icon.
3. Point to Database SQL and select your database type, for example, Db2.
4. In the Data Source Name field, enter: historicalEventsDatasource.
5. Enter your Username and Password in the fields provided and click the Save icon.
6. In the left-hand navigation pane, right-click ImpactHistoricalEventData and select New Data Type.
7. In the Data Type Name field, enter: historicalEventData.

8. Click Refresh.

Chapter 6. Getting started with Netcool Operations Insight

After you have installed the product, log into the different components of Netcool Operations Insight.

Getting started with Netcool Operations Insight on the cloud Get started with Netcool Operations Insight on the cloud by logging into the Cloud GUI and the Advanced GUI.

Which GUI to log into Within the Cloud-based and hybrid Netcool Operations Insight systems, capabilities are available in both the Cloud GUI and the Advanced GUI. Use this information to understand which user GUI to log into to access the different Netcool Operations Insight capabilities.

Table 54. Which GUI to log into

Cloud GUI
• Monitoring events
• Configuring analytics
• Administering analytics policies
• Configuring lightweight integrations, to easily ingest data from a wide range of Cloud environments, such as Amazon Web Services, Datadog, and Dynatrace
• Accessing topology
• Accessing runbooks

The Cloud GUI also provides a launch-out capability to Netcool Web GUI, Netcool/Impact, and the topology management service.

Advanced GUI
• Advanced event monitoring and configuration using Netcool/OMNIbus Web GUI
• Administering Impact policies using Netcool/Impact
• The topology management service
• DASH capabilities

Logging into Netcool Operations Insight

Use this information to understand how to access and log into the Cloud GUI and the Advanced GUI.

Before you begin

If you are logging into the Cloud GUI in a hybrid environment, then be aware of the following prerequisites:
• If you are unable to log into your production deployment, then ensure all your certificates are trusted.
• If you are unable to log into a demo, proof of concept, or trial deployment, then first log into the Advanced GUI using the URL constructed based on the instructions in this topic, and accept the certificate error. Then, log into the Cloud GUI.

About this task

The following table provides the parameterized hostname and URL. Use this URL to log into the Cloud GUI.

GUI         Default hostname               Default URL
Cloud GUI   netcool.custom_resource.fqdn   https://netcool.custom_resource.fqdn

The following table lists the Netcool Operations Insight on OpenShift components, together with parameterized hostnames and URLs. Use these URLs to log into the Advanced GUI and access these different components.

Component                                    Default hostname               Default URL
Web GUI / The topology management service    netcool.custom_resource.fqdn   https://netcool.custom_resource.fqdn/ibm/console
Netcool/Impact GUI                           impact.custom_resource.fqdn    https://impact.custom_resource.fqdn/ibm/console
WebSphere Application Server                 was.custom_resource.fqdn       https://was.custom_resource.fqdn/ibm/console
Netcool/Impact Primary                       nci-0.custom_resource.fqdn     https://nci-0.custom_resource.fqdn/nameserver/services
Netcool/Impact Backup                        nci-1.custom_resource.fqdn     https://nci-1.custom_resource.fqdn/nameserver/services

Note: In order to construct the URL for each GUI, you must first identify the hostname. This topic explains how to derive the information necessary to construct the hostname and the URL.

Procedure
1. Retrieve the hostname for each component of Netcool Operations Insight on Red Hat OpenShift with the following command:

kubectl get route -n namespace

Run the kubectl get ingress command to retrieve the hostname allocated by the Kubernetes Ingress controller to satisfy each ingress. This command retrieves data similar to the following:

custom_resource-ibm-hdm-analytics-dev-backend-ingress-4   netcool.custom_resource.fqdn   custom_resource-ibm-hdm-analytics-dev-inferenceservice   8080    edge                   None
custom_resource-ibm-hdm-common-ui                          netcool.custom_resource.fqdn   custom_resource-ibm-hdm-common-ui-uiserver               8080    edge/Redirect          None
custom_resource-impactgui-xyz                              impact.custom_resource.fqdn    custom_resource-impactgui                                17311   passthrough/Redirect   None
custom_resource-nci-0                                      nci-0.custom_resource.fqdn     custom_resource-nci-0                                    9080    edge/Redirect          None
custom_resource-proxy-xyz                                  proxy2.custom_resource.fqdn    custom_resource-proxy                                    6002    passthrough/Redirect   None
custom_resource-webgui-3pi                                 netcool.custom_resource.fqdn   custom_resource-webgui                                   16311   reencrypt/Redirect     None

Where:
• custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml / noi.ibm.com_nois_cr.yaml (CLI install).
• fqdn is the fully qualified domain name (FQDN) of the cluster's master node.
2. Construct the URL for each of the components of Netcool Operations Insight on OpenShift. You can then copy and paste this URL directly into your browser. A worked example follows this procedure.

https://hostname/ibm/console

Where hostname is the value from the HOSTS column in step 1 for the component that you want to log into. The hostname is made up of three elements:
• Component name; for example: impact.
• Custom resource name: custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml / noi.ibm.com_nois_cr.yaml (CLI install).
• Cluster name: fqdn
3. Ensure that the hostname in your URL resolves to an IP address. You can do this using one of the following methods:
• Configure your /etc/hosts file with the hostname and IP address from step 1.
• Query a DNS server on your network.
4. Navigate to the login page for the relevant GUI, by typing the URL that you constructed in step 2 into your browser.
5. Type your login credentials, and click Login.
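As a worked example of steps 1 and 2, consider a hypothetical deployment named noi on a cluster whose master node FQDN is apps.cluster.example.com (substitute your own deployment name and FQDN). The Netcool/Impact GUI hostname would be impact.noi.apps.cluster.example.com, and the URL to paste into your browser would be:

https://impact.noi.apps.cluster.example.com/ibm/console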

Accessing the main navigation menu

Open the main navigation menu to access the capabilities available in the Cloud GUI.

About this task

Click the navigation icon at the top-left corner of the screen to go to the main navigation menu. The following list explains all of the menu icons, the capabilities available from each menu, and the primary audience for each capability.

Events (primary audience: Operators)
  Display the Events page where you can monitor, investigate, and resolve events.

Topology (primary audience: Senior operators)
  Launch out to the Advanced GUI in a separate browser tab and open the Topology Viewer.

Automations (primary audience: Administrators)
  Runbooks: Build and execute runbooks for your operations team.
  Policies: List deployed analytics policies, or, if you have chosen to run analytics in Review first mode, review and deploy analytics policies generated by analytics.
  Netcool/Impact: Launch out to the Advanced GUI in a separate browser tab and open Netcool/Impact. Access your Impact policies here.

Dashboards
  Event reduction: Monitor how effectively analytics is reducing the event count for your operations team.
  Topology dashboard: Monitor the topologies you and your team created using the topology management service.

Administration
  Integrations with other systems: Set up incoming event data sources and API keys to bring event data into the system.
  Analytics configuration: Switch analytics types on and off and decide whether to leave temporal correlation to run in Deploy first mode or to switch to Review first.

Netcool Web GUI (primary audience: Operators)
  Switch back to the classic Event Viewer, as an alternative to the new Events page.

Accessing the Getting started page

Access the Getting started page at any time for quick links to help you learn more about Netcool Operations Insight, start using key Netcool Operations Insight features, and connect with communities and support centers for more information.

About this task

To access the Getting started page, click the Netcool Operations Insight banner at the top right of the page at any time.

The Getting started page provides a series of quick links to key capabilities of Netcool Operations Insight, as well as links to associated documentation.

Getting started with Netcool Operations Insight on premises

Log into the components of Operations Management, such as Web GUI and Operations Analytics - Log Analysis.

Getting started with Netcool Operations Insight

Use this information to start the components of Operations Management for Operations Insight and to log in using a Web browser.

About this task

Tip: You can create start-up scripts to automatically start the various Netcool Operations Insight products. For instructions and an example of how to configure a start-up script, see the 'The Netcool Process Agent and machine start-up' section in the Netcool/OMNIbus Best Practices Guide, which can be found on the Netcool/OMNIbus best-practices Wiki: http://ibm.biz/nco_bps

You can configure Jazz not to prompt for user credentials when the stop command is run. After creating a backup, edit the following lines in the /opt/IBM/JazzSM/profile/properties/soap.client.props file:

com.ibm.SOAP.securityEnabled=true
com.ibm.SOAP.loginUserid=smadmin
com.ibm.SOAP.loginPassword=netcool

Run the following command to encrypt the embedded password within the file:

/opt/IBM/JazzSM/profile/bin/PropFilePasswordEncoder.sh \ /opt/IBM/JazzSM/profile/properties/soap.client.props \ com.ibm.SOAP.loginPassword
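To confirm that the encoding took effect, you can inspect the password property again; with WebSphere-based servers such as Jazz for Service Management, the encoded value typically starts with an {xor} prefix rather than plain text:

grep com.ibm.SOAP.loginPassword /opt/IBM/JazzSM/profile/properties/soap.client.props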

For more information, see the following technote: http://www.ibm.com/support/docview.wss?uid=swg21584635

Procedure
• Start the Dashboard Application Services Hub server using the /opt/IBM/JazzSM/profile/bin/startServer.sh server_name command, as shown in the example after this list.
• Log in at http://host.domain:16310/ibm/console or, for a secured environment, at https://host.domain:16311/ibm/console, where host.domain is the fully qualified host name or IP address of the Jazz for Service Management application server. 16310 and 16311 are the default ports for HTTP and HTTPS respectively.
• Assign roles to users. To give users access to the Event Analytics capability, assign the ncw_analytics_admin role.
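For example, assuming the default WebSphere server name of server1 (check your own profile if it differs):

/opt/IBM/JazzSM/profile/bin/startServer.sh server1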

What to do next

Useful information about how to configure your environment is in the Netcool Operations Insight Example Guide, which you can download at https://../..//docs/noi/best-practices_noi/ .

Related tasks
Getting started with Networks for Operations Insight

Getting started with Networks for Operations Insight

After the installation and configuration steps are completed, you can start the components of the Networks for Operations Insight feature and log in to the host using a Web browser.

Procedure
1. Ensure that the Tivoli Netcool/OMNIbus ObjectServer is running. See http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/admin/task/omn_con_startingobjserv.html?lang=en
2. Start the Dashboard Application Services Hub server using the /opt/IBM/JazzSM/profile/bin/startServer.sh server_name command.
3. Source the environment variables. On the server where the Network Manager core components are installed, the script is installation_directory/netcool/core/env.sh. On the server where the Network Manager GUI components are installed, the script is installation_directory/nmgui_profile.sh, for example, /opt/IBM/netcool/nmgui_profile.sh.
4. Start the back-end processes for the products. Use the itnm_start command to start Network Manager. Do not use this command to start the ObjectServer. The ObjectServer is started and stopped by the Tivoli Netcool/OMNIbus commands. Start Netcool Configuration Manager separately by using the itncm.sh start script.
5. Log in at http://host.domain:16310/ibm/console or, for a secured environment, at https://host.domain:16311/ibm/console, where host.domain is the fully qualified host name or IP address of the Jazz for Service Management application server. 16310 and 16311 are the default ports for HTTP and HTTPS respectively. Use the supplied itnmadmin user with the password that you specified during the installation.
6. You can log in to the Netcool Configuration Manager - Base GUI at http://ncmserver-hostname:port-number, where ncmserver-hostname is the host name of the computer on which you installed Netcool Configuration Manager. port-number is the port that you specified during the installation. Use the default user name and password that you specified during the installation.

Related tasks
Starting Network Manager
Logging in
Launching Netcool Configuration Manager - Base

Related reference
itncm.sh script

Related information
Getting started with Network Manager
Getting started with Netcool Configuration Manager

Chapter 7. Administering

Perform the following tasks to administer the solution.

About this task

Administering Cloud and hybrid systems

Perform the following tasks to administer your Cloud or hybrid Netcool Operations Insight system.

Administering users

User access to all of the Netcool Operations Insight on Red Hat OpenShift interfaces is provided based on default user group settings. You can also optionally create new users and groups.

About this task

You can manage users using a built-in LDAP server (openLDAP server), or using your organization's LDAP server. The mechanism available to you was configured at installation time.

Single sign-on

Single sign-on is preconfigured in Netcool Operations Insight on Red Hat OpenShift. Federated repositories to support authentication for single sign-on are configured by default. The following table lists the federated repositories, and lists the default users and groups within these repositories. These default users are enabled by default.

Table 55. Federated repositories for single sign-on

InternalFileRepository
  Default groups: None
  Default users: admin

NetcoolObjectServer
  Default groups: This repository links to default Netcool/OMNIbus groups
  Default users: root, ncoadmin, ncouser, nobody
  Capability: Access individual Netcool Operations Insight components only

ICP_LDAP
  Default groups: icpadmins, icpusers
  Default users: icpadmin, icpuser, impactadmin
  Capability: Perform launch in context actions across Netcool Operations Insight components. Needed to access Event Analytics functionality.

Users defined in any of these repositories can access individual Netcool Operations Insight components directly based on their role defined within the repository. For example, the ncoadmin user within the NetcoolObjectServer repository can log into Web GUI to perform tasks in the Event Viewer. However, any user that needs to perform launch in context actions across Netcool Operations Insight components must be defined in the ICP_LDAP repository and in either one of the groups icpadmins or icpusers. This ensures that all of the relevant roles in Netcool/OMNIbus, Dashboard Application Services Hub, and Netcool/Impact are assigned to the user in order for launch in context actions to function properly.

Default users

The following table describes users that are present after installation, along with their groups.

Users and their groups
Note: impactadmin and unityadmin will not exist by default if you installed with LDAP mode: proxy, and must be manually created. For more information, see “Creating users on an external LDAP server” on page 356.

Table 56. Users present after installation

icpadmin
  Roles: Inherited from the group
  Group: icpadmins
  Password: netcool
  Description: Sample administrator user for Operations Management on a container platform.

icpuser
  Roles: Inherited from the group
  Group: icpusers
  Password: netcool
  Description: Sample end user for Operations Management on a container platform.

impactadmin
  Roles: Netcool/Impact-specific roles: impactAdminUser, impactFullAccessUser, impactOpViewUser, impactMWMAdminUser, impactMWMUser, impactSelectedOpViewUser, impactUIDataProviderUser, impactWebServiceUser, ConsoleUser, WriteAdmin, ReadAdmin, impactRBAUser
  Group: icpadmins
  Password: netcool
  Description: Administrator user for Netcool/Impact.

admin
  Roles: Dashboard Application Services Hub-specific roles: iscadmins, chartAdministrator, samples, administrator
  Password: netcool
  Description: The administrator for Dashboard Application Services Hub. In a new installation, this user has permissions to administer users, groups, roles, and pages.

Default Netcool/OMNIbus users
  See Netcool/OMNIbus V8.1.0 documentation: users

Default Web GUI users
  See Netcool/OMNIbus V8.1.0 documentation: Web GUI users and groups

Default groups

Use groups to organize users into units with common functional goals. Several groups are created at installation.

Default user groups
The following groups are supplied with Netcool Operations Insight on Red Hat OpenShift. Users are assigned to these groups during installation.
Note: icpadmins and icpusers will not exist by default if you installed with LDAP mode: proxy, and must be manually created. For more information, see “Creating users on an external LDAP server” on page 356.

Table 57. Default user groups

icpadmins
  Description: Assign all Netcool Operations Insight on Red Hat OpenShift administrators to this group so that they have administrative permissions over all of the Netcool Operations Insight components.
  Dashboard Application Services Hub-specific roles: chartViewer, suppressmonitor, monitor, configurator, iscadmins, chartAdministrator, ncw_user, samples, operator, ncw_analytics_admin, chartCreator, ncw_gauges_editor, ncw_admin, netcool_rw, ncw_dashboard_editor, administrator, netcool_ro, ncw_gauges_viewer
  Netcool/Impact-specific roles: impactAdminUser, impactFullAccessUser, impactOpViewUser, impactMWMAdminUser, impactMWMUser, impactSelectedOpViewUser, impactUIDataProviderUser, impactWebServiceUser, ConsoleUser, WriteAdmin, ReadAdmin, impactRBAUser

icpusers
  Description: Assign all Netcool Operations Insight on Red Hat OpenShift users and operators to this group so that they have permissions to use the Netcool Operations Insight components.
  Dashboard Application Services Hub-specific roles: chartViewer, monitor, configurator, ncw_user, samples, operator, netcool_rw, netcool_ro, ncw_gauges_viewer
  Netcool/Impact-specific roles: impactOpViewUser, impactMWMAdminUser, impactMWMUser, impactSelectedOpViewUser, impactUIDataProviderUser, impactWebServiceUser, ConsoleUser, WriteAdmin, ReadAdmin, impactRBAUser

Default Netcool/OMNIbus groups and roles
  See Netcool/OMNIbus V8.1.0 documentation: default groups

Default Web GUI groups and roles
  See Netcool/OMNIbus V8.1.0 documentation: Web GUI users and groups

Creating users on an external LDAP server

Certain LDAP entries must be created in the target LDAP server that is used by the Netcool Operations Insight on Red Hat OpenShift deployment if you installed with LDAP mode: proxy. If you installed with LDAP mode: standalone, then these mandatory entries will already exist. If the required LDAP entries are missing, then some pods do not start correctly. There are other recommended LDAP entries whose absence does not cause a failure to deploy, but which improve the organization of entities in a deployment: Users, Groups, and Roles.

Before deploying the offering, the LDAP server administrator must provide a base Distinguished Name (DN) value for the destination LDAP server. Additionally, the LDAP administrator must create the required LDAP entries at the base DN, and also review and optionally create the recommended LDAP entries. All LDAP entries are described in the following sections along with their DN and requirement. In all cases, the LDAP_SUFFIX placeholder must be replaced with the base DN value that is provided by the LDAP administrator.

Organizational Units

Unit name    Distinguished Name         Requirement
groups       ou=groups,LDAP_SUFFIX      Required
users        ou=users,LDAP_SUFFIX       Required

Example LDIF to create organizational units
In all LDIF examples, LDAP_SUFFIX is replaced with dc=myldap,dc=org

dn: ou=groups,dc=myldap,dc=org
objectClass: organizationalUnit
objectClass: top
ou: groups

dn: ou=users,dc=myldap,dc=org
objectClass: organizationalUnit
objectClass: top
ou: users

Users

User name     Distinguished Name                      Requirement
icpadmin      uid=icpadmin,ou=users,LDAP_SUFFIX       Recommended
icpuser       uid=icpuser,ou=users,LDAP_SUFFIX        Recommended
impactadmin   uid=impactadmin,ou=users,LDAP_SUFFIX    Required
unityadmin    uid=unityadmin,ou=users,LDAP_SUFFIX     Required

Example LDIF for creating users

dn: uid=icpuser,ou=users,dc=myldap,dc=org
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
cn: ICP User
uid: icpuser
givenName: ICP User
sn: icpuser
userPassword:: password

dn: uid=icpadmin,ou=users,dc=myldap,dc=org
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
cn: ICP Admin
uid: icpadmin
givenName: ICP Admin
sn: icpadmin
userPassword:: password

dn: uid=unityadmin,ou=users,dc=myldap,dc=org
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
cn: Unity Admin
uid: unityadmin
givenName: Unity
sn: unityadmin
userPassword:: password

dn: uid=impactadmin,ou=users,dc=myldap,dc=org
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
cn: Impact Admin User
uid: impactadmin
givenName: Impact Admin User
sn: impactadmin
userPassword:: password

Groups

Group name     Distinguished Name                       Members             Requirement
icpadmins      cn=icpadmins,ou=groups,LDAP_SUFFIX       icpadmin            Recommended
icpusers       cn=icpusers,ou=groups,LDAP_SUFFIX        icpadmin, icpuser   Recommended
unityadmins    cn=unityadmins,ou=groups,LDAP_SUFFIX     unityadmin          Recommended
impactadmins   cn=impactadmins,ou=groups,LDAP_SUFFIX    impactadmin         Recommended

Example LDIF for creating groups

dn: cn=icpadmins,ou=groups,dc=myldap,dc=org
cn: icpadmins
owner: uid=icpadmin,ou=users,dc=myldap,dc=org
description: ICP Admins group
objectClass: groupOfNames
member: uid=icpadmin,ou=users,dc=myldap,dc=org

dn: cn=icpusers,ou=groups,dc=myldap,dc=org
cn: icpusers
owner: uid=icpuser,ou=users,dc=myldap,dc=org
description: ICP Users group
objectClass: groupOfNames
member: uid=icpuser,ou=users,dc=myldap,dc=org
member: uid=icpadmin,ou=users,dc=myldap,dc=org

dn: cn=unityadmins,ou=groups,dc=myldap,dc=org
cn: unityadmins
owner: uid=unityadmin,ou=users,dc=myldap,dc=org
description: Unity Admins group
objectClass: groupOfNames
member: uid=unityadmin,ou=users,dc=myldap,dc=org
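You can load these LDIF files with a standard ldapadd command. A minimal sketch, assuming the group entries above are saved as groups.ldif and that your LDAP administrator binds as cn=admin,dc=myldap,dc=org (both names are examples; substitute your own):

ldapadd -x -D "cn=admin,dc=myldap,dc=org" -W -f groups.ldif

The -W option prompts for the bind password instead of passing it on the command line.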

Managing users with LDAP

You can create users and perform other user management tasks by using LDAP if you selected the option to use the built-in LDAP server (LDAP mode: standalone) at installation time, or if you selected LDAP mode: proxy at installation time and the required users and groups exist in your external LDAP server. See “Creating users on an external LDAP server” on page 356.

About this task

The open source OpenLDAP server is installed as part of the Netcool Operations Insight on Red Hat OpenShift installation. Access this server to manage users and groups. Users and groups can also be managed through the WebSphere Application Server UI. For more information, see the OMNIbus Knowledge Center: https://www.ibm.com/support/knowledgecenter/en/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/webtop/wip/task/web_adm_createuserwebsphere.html

Creating a user and adding a user to a group

Use LDAP to create a new user and add that user to an existing group. You can also add an existing user to an existing group.

Before you begin

You must configure the command line on your terminal to communicate with the cluster by using the Kubernetes command line interface kubectl.
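How you authenticate depends on your cluster setup. For example, on Red Hat OpenShift you would typically log in with the oc command, which also configures kubectl access; the server URL and user below are placeholders:

oc login https://api.mycluster.example.com:6443 -u admin_user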

Procedure
1. Run the following command to retrieve the identifier of the LDAP Proxy Server pod.

kubectl get pods | grep custom_resource-openldap-0

Where custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml / noi.ibm.com_nois_cr.yaml (CLI install).
2. Log in to the LDAP Proxy Server pod.

kubectl exec -it openldap_pod_id /bin/bash

Where openldap_pod_id is the identifier of the LDAP Proxy Server pod. Proceed as follows:

If you want to...                          Then...
Create a new user and add it to a group    Go to the next step
Add an existing user to a group            Go to step 4

3. Create the new user.
a) Create an LDAP Data Interchange Format file to define the new user. For example:

vi newuser.ldif

b) Define the contents of the LDIF file that you created by using a format similar to this example:

dn: uid=icptester,ou=users,dc=mycluster,dc=icp
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
cn: ICP Test User
uid: icptester
givenName: ICP Test User
sn: icptester
userPassword: password

Where:
• uid is the user ID of the new user. For example, icptester.
• dc is the domain components that were specified for the suffix and baseDN. By default the value of this parameter is dc=mycluster,dc=icp.
• userPassword is the password for this user.
All other attributes in the file can be defined as shown in the code sample.
c) Run the following command to create the new user.

ldapadd -c -x -w LDAP_BIND_PWD -D LDAP_BIND_DN -f filename.ldif

Where:
• LDAP_BIND_PWD is the password for the ldap_bind function, which asynchronously authenticates a client with the LDAP server. By default the value of this parameter is admin.
• LDAP_BIND_DN is an object in LDAP that can carry a password. In the example, the value is:

"cn=admin,dc=mycluster,dc=icp"

• filename is the name of the LDAP Data Interchange Format file that is defined in step 3 a. In the example used there, filename is newuser.
4. Add the user to an existing group.
a) Create an LDAP Data Interchange Format file to add the user to a group. For example:

vi addUsersToGroup.ldif

b) Define the contents of the file by using a format similar to the following:

dn: cn=icpadmins,ou=groups,dc=mycluster,dc=icp
changetype: modify
add: member
member: uid=icptester,ou=users,dc=mycluster,dc=icp

c) Run the following command to add the user to a group.

ldapmodify -w LDAP_BIND_PWD -D LDAP_BIND_DN -f filename.ldif

Where:
• LDAP_BIND_PWD is the password for the ldap_bind function, which asynchronously authenticates a client with the LDAP server. By default the value of this parameter is admin.
• LDAP_BIND_DN is an object in LDAP that can carry a password. In the example, the value is:

"cn=admin,dc=mycluster,dc=icp"

• filename is the name of the LDAP Data Interchange Format file that is defined in step 4 a. In the example used there, filename is addUsersToGroup.
5. Check that the users and groups were added to LDAP by running the following command.

ldapsearch -x -LLL -H ldap:/// -b dc=mycluster,dc=icp
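To check a single entry rather than list the whole tree, you can add a search filter. For example, to verify the icptester user created earlier:

ldapsearch -x -LLL -H ldap:/// -b "ou=users,dc=mycluster,dc=icp" "(uid=icptester)"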

Removing a user from a group

Use LDAP to remove a user from a group.

Procedure
1. Create an LDIF file, such as the one in the following example, which removes the user icptester from the group icpadmins.

$ cat remove-testuser-from-group.ldif
dn: cn=icpadmins,ou=groups,dc=mycluster,dc=icp
changetype: modify
delete: member
member: uid=icptester,ou=users,dc=mycluster,dc=icp

Where:
• uid is the user ID of the user to be removed from the group.
• cn is the group that the user is to be removed from.
• dc is the domain components that were specified for the suffix and baseDN. By default the value of this parameter is dc=mycluster,dc=icp.
2. Run ldapmodify with the LDIF file that you created.

$ ldapmodify -w mypassword -D 'cn=admin,dc=mycluster,dc=icp' -f remove-testuser-from-group.ldif

Deleting a user

Delete a user by using LDAP.

Procedure
Run the ldapdelete command to delete a user, as in the following example that deletes the user icptester.

ldapdelete -w mypassword -D 'cn=admin,dc=mycluster,dc=icp' 'uid=icptester,ou=users,dc=mycluster,dc=icp'

Where:
• uid is the user ID of the user to be deleted.
• dc is the domain components that were specified for the suffix and baseDN. By default the value of this parameter is dc=mycluster,dc=icp.
• mypassword is the password of the admin bind user specified by the -D option.

Loading data

To learn about cloud native analytics, you can load data from your local system, migrate historical data from a reporter database, or use the sample data set, which is provided with the installation. Load data, train the system, and see the results.

Loading sample data: scenario for cloud native analytics

To learn about cloud native analytics, you can install a sample data set. Learn how to install and load sample data, train the system, and see the results.

Before you begin

Complete the following prerequisite items:
• The ea-events-tooling container is installed by the operator. It is not started as a pod, and contains scripts to install data on the system, which can be run with the kubectl run command.
• Find the values of image and image_tag for the ea-events-tooling container, from the output of the following command:

kubectl get noi release_name -o yaml | grep ea-events-tooling

Where release_name is the custom resource release name of your cloud deployment. For example, in the output below image is ea-events-tooling, and image_tag is 2.0.14-20200120143838GMT.

kubectl get noi myreleasename -o yaml | grep ea-events-tooling
  --env=CONTAINER_IMAGE=image-registry.openshift-image-registry.svc:5000/default/ea-events-tooling:2.0.14-20200120143838GMT \
  --image=image-registry.openshift-image-registry.svc:5000/default/ea-events-tooling:2.0.14-20200120143838GMT \

For a hybrid deployment, run the following command:

kubectl get noihybrid release_name -o yaml | grep ea-events-tooling

Where release_name is the custom resource release name of your hybrid deployment.
• If you created your own docker registry secret in Creating a registry secret to support namespace access, then patch your serviceaccount.

kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "noi-registry-secret"}]}'

Where noi-registry-secret is the name of the secret for accessing the Docker repository.

Note: As an alternative to patching the default service account with image pull secrets, you can add the following option to each kubectl run command that you issue:

--overrides='{ "apiVersion": "v1", "spec": { "imagePullSecrets": [{"name": "noi-registry-secret"}] } }'

About this task

You can use scripts in the ea-events-tooling container to install sample data on the system. Run the loadSampleData.sh script to load data to the ingestion service, train it, create a scope-based policy, and load the data into IBM Netcool Operations Insight. This script loads prebuilt data into the ingestion service and ObjectServer and trains the system for seasonality and related events. To access the secrets, which control access to the ObjectServer, Web GUI, and policy administration, the loadSampleData.sh script needs to run as a job.

Note: Loading prebuilt data, training the system, and loading data into the ObjectServer must be carried out only once. Thus, for the example procedure below, step 1.b must be invoked just once.

Procedure
1. Run the loadSampleData.sh script from a job so that it has access to the secrets. Complete the following steps:
a. Use the -j option with the script to generate a YAML file, such as loadSampleJob.yaml in the following example:

kubectl delete pod ingesnoi3
kubectl run ingesnoi3 --restart=Never --env=LICENSE=accept --image-pull-policy=Always \
  --env=CONTAINER_IMAGE=image:image_tag \
  -i --image=image:image_tag \
  loadSampleData.sh -- -r release_name -j > loadSampleJob.yaml

Where:
• image is the location of the ea-events-tooling container, as described earlier.
• CONTAINER_IMAGE is an environment variable, which is the same as the value you pass to the --image parameter in the Kubernetes command. This variable allows the container to populate the image details in the YAML file output that it creates.
• image_tag is the image version tag, as described earlier.
• release_name is the custom resource release name of your deployment.
b. Create a job using the generated YAML file, such as loadSampleJob.yaml in the following example:

kubectl create -f loadSampleJob.yaml -n namespace

Where namespace is the namespace in which your deployment is installed.
Note: If the default service account does not have access to the image repository, uncomment the image pull secrets section in the loadSampleJob.yaml file and set the imagePullSecrets.name parameter to your Docker secret name before running the kubectl create command.
A job called release_name-loadSampleData is created. You can view the job output with the pod logs created by the job.
2. View how the sample data has been grouped.
a. Connect to Web GUI.
b. Select Incident > Event Viewer. The list of all events is displayed.
c. Select the Example_IBM_CloudAnalytics view to see how the events from the sample data are grouped.
3. Turn off auto-deploy mode to view the suggested policies before they are deployed.

The loadSampleData.sh script switches your configuration to auto-deploy mode. If you want to see the suggested policies for the sample data, and how you can manually enable them, then run the following steps to disable automatic policy deployment, and re-run a portion of the training.
a) Get the start and end times for the sample data, as in the following example.

kubectl delete pod ingesnoi3; kubectl run ingesnoi3 -i --restart=Never --env=LICENSE=accept \
  --image=image:image_tag getTimeRange.sh samples/demoTrainingData.json.gz

pod "ingesnoi3" deleted
{"minfirstoccurence":{"epoc":1552023064603,"formatted":"2019-03-08T05:31:04.603Z"},
"maxlastoccurrence":{"epoc":1559729860924,"formatted":"2019-06-05T10:17:40.924Z"}}

b) Run the script with the -d parameter set to false to disable auto-deployment.

kubectl delete pod ingesnoi3; kubectl run ingesnoi3 -i --restart=Never --env=LICENSE=accept \
  --image-pull-policy=Always --image=image:image_tag \
  runTraining.sh -- -r test-install -a related_events -s 1552023064603 -e 1559729860924 -d false

Where:
• image is the location of the ea-events-tooling container, as described earlier.
• image_tag is the image version tag, as described earlier.
• -s and -e are the start and end times that are returned in step 3 a.

What to do next

When you have finished using the sample data scenario and you want to train with real event data, you will need to re-run training. See “Training with real event data” on page 368.

Loading local data

To learn about cloud native analytics, you can install a local data set. Learn how to install and load your local data, train the system, and see the results.

Before you begin

Before you complete these steps, complete the following prerequisite items:
• The ea-events-tooling container is installed by the operator. It is not started as a pod, and contains scripts to install data on the system, which can be run with the kubectl run command.
• Find the values of image and image_tag for the ea-events-tooling container, from the output of the following command:

kubectl get noi release_name -o yaml | grep ea-events-tooling

Where release_name is the custom resource release name of your cloud deployment. For example, in the output below image is ea-events-tooling, and image_tag is 2.0.14-20200120143838GMT.

kubectl get noi myreleasename -o yaml | grep ea-events-tooling
  --env=CONTAINER_IMAGE=image-registry.openshift-image-registry.svc:5000/default/ea-events-tooling:2.0.14-20200120143838GMT \
  --image=image-registry.openshift-image-registry.svc:5000/default/ea-events-tooling:2.0.14-20200120143838GMT \

For a hybrid deployment, run the following command:

kubectl get noihybrid release_name -o yaml | grep ea-events-tooling

Where release_name is the custom resource release name of your hybrid deployment.
• Determine the HTTP username and password from the secret that has systemauth in the name, by running the following command:

kubectl get secret release_name-systemauth-secret -o yaml

• If you created your own docker registry secret in Creating a registry secret to support namespace access, then patch your serviceaccount.

kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "noi-registry-secret"}]}'

Where noi-registry-secret is the name of the secret for accessing the Docker repository.
Note: As an alternative to patching the default service account with image pull secrets, you can add the following option to each kubectl run command that you issue:

--overrides='{ "apiVersion": "v1", "spec": { "imagePullSecrets": [{"name": "noi-registry-secret"}] } }'

About this task

You can use scripts in the ea-events-tooling container to install local data on the system. To complete this task, run the filetoingestionservice.sh, getTimeRange.sh, runTraining.sh, createPolicy.sh, and filetonoi.sh scripts. The scripts load local data to the ingestion service, train the system for seasonality and temporal events, create live seasonal policies and suggest temporal policies, and load the data into cloud native analytics.

Procedure
1. Send local data to the ingestion service. Run the filetoingestionservice.sh script:

export RELEASE=release_name
export HTTP_PASSWORD=$(kubectl get secret $RELEASE-systemauth-secret -o jsonpath --template '{.data.password}' | base64 --decode)
export HTTP_USERNAME=$(kubectl get secret $RELEASE-systemauth-secret -o jsonpath --template '{.data.username}' | base64 --decode)

cat mydata.json.gz | kubectl run ingesthttp -i --restart=Never --image=image:image_tag \
  --env=INPUT_FILE_NAME=stdin --env=LICENSE=accept --env=HTTP_USERNAME=$HTTP_USERNAME \
  --env=HTTP_PASSWORD=$HTTP_PASSWORD filetoingestionservice.sh $RELEASE

Where:
• release_name is the custom resource release name of your deployment.
• mydata.json.gz is the path to your local compressed data file.
• image is the location of the ea-events-tooling container. The image value can be found from the kubectl get noi command, as described earlier.
• image_tag is the image version tag, as described earlier.
• You can override the username and password by using HTTP_USERNAME and HTTP_PASSWORD.
Note: If you specify the --env=INPUT_FILE_NAME=stdin parameter, you can send your local data to the scripts by using the -i option with the kubectl run command. This option links the stdin parameter on the target pod to the stdout parameter.
2. Use the getTimeRange.sh script to calculate the training time range. If no time range is specified, the trainer trains against all rows for the tenant ID. Instead of using all data associated with the tenant ID to train the system, run the following command to find the start and end time stamps of the data:

cat mydata.json.gz | kubectl run ingesnoi3 -i --restart=Never --env=LICENSE=accept \
  --image-pull-policy=Always --image=image:image_tag getTimeRange.sh stdin

Output similar to the following example is displayed:

{"minfirstoccurence":{"epoc":1540962968226,"formatted":"2018-10-31T05:16:08.226Z"}, "maxlastoccurrence":{"epoc":1548669553896,"formatted":"2019-01-28T09:59:13.896Z"}}

3. Train the system with the new data. Run the runTraining.sh script:

kubectl run trainer -it --command=true --restart=Never --env=LICENSE=accept \
  --image=image:image_tag runTraining.sh -- -r release_name [-t tenantid] [-a algorithm] \
  [-s start-time] [-e end-time] [-d auto-deploy]

Where:
• release_name is the custom resource release name of your deployment.
• image is the location of the ea-events-tooling container. The image value can be found from the kubectl get noi command, as described earlier.
• image_tag is the image version tag, as described earlier.
• algorithm is either related-events or seasonal-events. If not specified, defaults to related-events.
• Optional: tenantid is the tenant ID associated with the data that is ingested, as specified by the global.common.eventanalytics.tenantId parameter in the values.yaml file that is associated with the operator.
• Optional: start-time and end-time are the start and end times to train against. These values are provided in the command output from step “2” on page 364. You can specify the start or end time, neither, or both. If neither are specified, the current time is used as the end time and the start time is 93 days before the end time. You can either specify the start and end times with an integer Epoch time format in milliseconds, or with the default date string formatting for the system. Run the ./runTraining.sh -h command to determine the default date formatting.
• Optional: auto-deploy. Set to true to deploy policies immediately. Set to false to review policies before deployment.
4. Create a policy. A policy can be created through the UI, or you can specify a policy by running the createPolicy.sh script:

export ADMIN_PASSWORD=$(kubectl get secret release_name-systemauth-secret -o jsonpath --template '{.data.password}' | base64 --decode)
kubectl run createpolicy --restart=Never --image=image:image_tag --env=LICENSE=accept \
  --env=ADMIN_PASSWORD=${ADMIN_PASSWORD} createPolicy.sh -- -r release_name

Where:
• release_name is the custom resource release name of your deployment.
• image is the location of the ea-events-tooling container. The image value can be found from the kubectl get noi command, as described earlier.
• image_tag is the image version tag, as described earlier.
Note: This step creates a policy that maps the node to resource/name by default. If you want to map to resource/hostname or resource/ipaddress instead, specify the --env=CONFIGURATION_PROPERTIES=resource/hostname|ipaddress parameter.
5. Send local data to the ObjectServer with the filetonoi.sh script.
a) Find the OMNIbus password, if this is not known, with the following command:

export OMNIBUS_ROOT_PASSWORD=$(kubectl get secret omni-secret -o jsonpath --template '{.data.OMNIBUS_ROOT_PASSWORD}' | base64 --decode)

Where omni-secret is the name of the OMNIbus secret as specified by global.omnisecretname in your installation parameters, usually release_name-omni-secret, where release_name is the custom resource release name of your deployment.
b) Create the ingesnoi pod. Use this command if you have a hybrid deployment (on-premises Netcool Operations Insight with cloud native analytics on a container platform):

kubectl run ingesnoi -i --restart=Never --image=image:image_tag --env=INPUT_FILE_NAME=stdin \
  --env=LICENSE=accept --env=EVENT_RATE_ENABLED=false --env=EVENT_FILTER_ENABLED=true \
  --env=EVENT_REPLAY_REGULATE=true --env=EVENT_REPLAY_SPEED=60 \
  --env=EVENT_REPLAY_ENACT_DELETIONS=false --env=JDBC_PASSWORD=ospassword \
  filetonoi.sh -- release_name oshostname osport

Use this command if you have a full Operations Management deployment on a container platform:

kubectl run ingesnoi -i --restart=Never --image=image:image_tag --env=INPUT_FILE_NAME=stdin \
  --env=LICENSE=accept --env=EVENT_RATE_ENABLED=false --env=EVENT_FILTER_ENABLED=true \
  --env=EVENT_REPLAY_REGULATE=true --env=EVENT_REPLAY_SPEED=60 \
  --env=EVENT_REPLAY_ENACT_DELETIONS=false --env=JDBC_PASSWORD=ospassword \
  filetonoi.sh -- release_name

Where:
• image is the location of the ea-events-tooling container. The image value can be found from the kubectl get noi command, as described earlier.
• image_tag is the image version tag, as described earlier.
• ospassword is the on-premises ObjectServer password.
• oshostname is the on-premises ObjectServer host (on hybrid installs only).
• osport is the on-premises ObjectServer port (on hybrid installs only).
You can specify a user ID and password by using JDBC_USERNAME and JDBC_PASSWORD. These parameters correspond to the user ID and password of the ObjectServer.
Note: You can view the available overrides and their default values by using the --help command option.
c) Copy the file to be replayed into the ingesnoi pod.

kubectl cp mydata.json.gz ingesnoi:/tmp

d) Exec into the ingesnoi pod, and then replay the file into OMNIbus.

kubectl exec -it ingesnoi -- /bin/bash
cd bin; cat /tmp/mydata.json.gz | ./filetonoi.sh release_name

Where release_name is the release name of your deployment. If you are unable to replay some events, include the following parameters:

--env=INPUT_REPORTERDATA=false --env=EVENT_REPLAY_TEMPORALITY_PRIMARY_TIMING_FIELD=LastOccurrence --env=EVENT_REPLAY_PREEMPTIVE_DELETIONS=false --env=EVENT_REPLAY_SKIP_DELETION=false --env=INPUT_TAG_TOOL_GENERATED_EVENTS=false

If you encounter more data integrity issues, include the following parameters:

--env=EVENT_REPLAY_STRICT_DELETIONS=false

6. View the data.
a. Connect to Web GUI.
b. Select Incident > Events > Event Viewer. Select the All Events filter. The list of all events is displayed.
c. Select the Example_IBM_CloudAnalytics view to see how the events from the local data are grouped.

Migrating historical data from a reporter database: scenario for cloud native analytics

To learn about cloud native analytics, you can install a historical data set. Learn how to install and load historical data from a reporter database and train the system.

Before you begin

Before you complete these steps, complete the following prerequisite items:

• The ea-events-tooling container is installed by the operator. It is not started as a pod, and contains scripts to install data on the system, which can be run with the kubectl run command.
• Find the values of image and image_tag for the ea-events-tooling container, from the output of the following command:

kubectl get noi release_name -o yaml | grep ea-events-tooling

Where release_name is the custom resource release name of your deployment. For example, in the output below image is ea-events-tooling, and image_tag is 2.0.14-20200120143838GMT.

kubectl get noi myreleasename -o yaml | grep ea-events-tooling
  --env=CONTAINER_IMAGE=image-registry.openshift-image-registry.svc:5000/default/ea-events-tooling:2.0.14-20200120143838GMT \
  --image=image-registry.openshift-image-registry.svc:5000/default/ea-events-tooling:2.0.14-20200120143838GMT \

For a hybrid deployment, run the following command:

kubectl get noihybrid release_name -o yaml | grep ea-events-tooling

Where release_name is the custom resource release name of your hybrid deployment.
• If you created your own docker registry secret in Creating a registry secret to support namespace access, then patch your service account.

kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "noi-registry-secret"}]}'

Where noi-registry-secret is the name of the secret for accessing the Docker repository.
Note: As an alternative to patching the default service account with image pull secrets, you can add the following option to each kubectl run command that you issue:

--overrides='{ "apiVersion": "v1", "spec": { "imagePullSecrets": [{"name": "noi-registry-secret"}] } }'

About this task

You can use a script in the ea-events-tooling container to install historical data on the system. Run the jdbctoingestionservice.sh script to load historical data to the ingestion service. This script loads historical data from your reporter database into the ingestion service. The script runs inside the Kubernetes cluster, which must have access to the JDBC port on your reporter database server.

Procedure
1. Run the jdbctoingestionservice.sh script:

export RELEASE=release_name
export HTTP_PASSWORD=$(kubectl get secret $RELEASE-systemauth-secret -o jsonpath --template '{.data.password}' | base64 --decode)
export WAS_PASSWORD=$(kubectl get secret $RELEASE-was-secret -o jsonpath --template '{.data.WAS_PASSWORD}' | base64 --decode)
kubectl run ingesthttp -i --restart=Never --image=image/ea-events-tooling:image_tag \
  --image-pull-policy=Always --env=INPUT_JDBC_HOSTNAME=jdbc_hostname --env=INPUT_JDBC_PORT=jdbc_port \
  --env=INPUT_JDBC_USERID=$JDBC_USER --env=INPUT_JDBC_PASSWORD=$JDBC_PASSWORD \
  --env=LICENSE=accept --env=HTTP_PASSWORD=$HTTP_PASSWORD \
  jdbctoingestionservice.sh -- -r $RELEASE -s "start-time" -e "end-time"

Where:
• $RELEASE is the release of the operator and corresponds to the NAME field.
• image is the location of the ea-events-tooling container, as described earlier.
• image_tag is the image version tag, as described earlier.
• jdbc_hostname and jdbc_port are the host name and port of the server that runs the Db2 instance and hosts the reporter database.
• $JDBC_USER and $JDBC_PASSWORD are the user ID and password of a user with read access to the reporter database.
• HTTP_PASSWORD can be found with export HTTP_PASSWORD=$(kubectl get secret $RELEASE-systemauth-secret -o jsonpath --template '{.data.password}' | base64 --decode)
• Optional: start-time and end-time are the start and end times to train against. You can specify the start or end time, neither, or both. If neither are specified, the current time is used as the end time and the start time is 93 days before the end time. You can either specify the start and end times with an integer Epoch time format in milliseconds, or with the default date string formatting for the system. Run the ./runTraining.sh -h command to determine the default date formatting.
2. Train the system with the new data. Run the runTraining.sh script:

kubectl run trainer -it --command=true --restart=Never --env=LICENSE=accept \
  --image=image/ea-events-tooling:image_tag runTraining.sh -- -r $RELEASE [-t tenantid] \
  [-a algorithm] [-s start-time] [-e end-time]

Where:
• algorithm is either related-events or seasonal-events. If not specified, defaults to related-events.
• tenantid is the tenant ID associated with the data that is ingested, as specified by the global.common.eventanalytics.tenantId parameter in the values.yaml file that is associated with the operator.
• start-time and end-time are the start and end times to train against. Specify the same values that you passed to the jdbctoingestionservice.sh script in step “1” on page 367.

Training with real event data

If you ran scenarios to load sample data, local data, or migrated data, then you must reset cloud native analytics if you want it to train against real events from a defined time period, and to suggest and deploy policies.

About this task

Once you have finished working with the sample data, local data, or migrated data scenarios, run this procedure to start processing live event data.
Note: Policies and groups that were created for the sample data, local data, and migrated data scenarios are removed.

Procedure
Run the runTraining.sh script.

kubectl run trainer -it --command=true --restart=Never --env=LICENSE=accept --image=image:image_tag runTraining.sh -- -r release_name [-t tenantid] [-a algorithm] [-s start-time] [-e end-time] [-d auto-deploy]

Where:
• release_name is the custom resource release name of your deployment.
• image is the location of the ea-events-tooling container. The image value can be found from the kubectl get noi command, as described earlier.
• image_tag is the image version tag, as described earlier.
• algorithm is either related-events or seasonal-events. If not specified, defaults to related-events.

• Optional: tenantid is the tenant ID associated with the data that is processed, as specified by the global.common.eventanalytics.tenantId parameter in the values.yaml file that is associated with the operator.
• Optional: start-time and end-time are the start and end times to train against. You can specify the start or end time, neither, or both. If neither are specified, the current time is used as the end time and the start time is 93 days before the end time. You can either specify the start and end times with an integer Epoch time format in milliseconds, or with the default date string formatting for the system. Run ./runTraining.sh -h to determine the default date formatting.
• Optional: auto-deploy. Set to true to deploy policies immediately. Set to false to review policies before deployment. Running sample data or local data scenarios turns on the auto-deployment of policies, even if you have installed with temporalGroupingDeployFirst set to false.

Administering the Events page

As a senior operator or an administrator, you need to perform administration tasks as part of system maintenance and to ensure the correct operation of the Events page.

About this task

All Events page administration tasks are performed through the corresponding tasks in the Netcool Web GUI. For more information, see the related link at the bottom of this topic.

Administering topology

As a senior operator or administrator, you need to administer the topology management system to ensure that the topology is kept up to date. Administration tasks include managing observer jobs to ingest the latest topology updates, and creating and maintaining topology templates to generate defined topologies.

About this task

To access topology management administration tasks, click Topology on the main navigation menu of the Cloud GUI. For more information, see the related topic at the bottom of this page.

Administering policies

Use this information to understand how to administer user-created policies and policies created by analytics.

Administering policies created by analytics

Use this information to understand how to manage policies that have been created by analytics.

About analytics-created policies

Policies take action against incoming events. You can select between three different tabs on the Policies GUI. The Created by analytics tab lists all of the policies that are created and deployed by the various Netcool Operations Insight analytics algorithms. The Suggested policies tab displays policies that are suggested by analytics. Administrators can activate suggested policies to act on incoming events and reject unwanted policies. Policies that have been rejected by administrators or senior operators are shown in the Rejected tab. Rejected policies don't act on incoming events.

Chapter 7. Administering 369 Figure 11. Policies GUI

Table 58. Policies table columns

Policy name
  A global unique identifier string to identify the policy. To customize a policy name, click the menu overflow icon and select Rename.

Locked
  Indicates whether the policy is locked or unlocked. Locked policies continue to act on incoming events. However, the analytics algorithm cannot update a locked policy.
  CAUTION: Once a policy has been locked, it cannot be unlocked, even by an administrator. The unlock action on a policy will mark it as unlocked in this GUI, and in the Policies GUI, but the policy continues to be locked.

Created by
  Identifies which of the following Netcool Operations Insight analytics algorithms created the policy:
  Related events: Policies created by the related events algorithm group events that are historically related. The related events function deploys chosen correlation rules, which are derived from related events configurations.
  Seasonality: Seasonality policies identify individual alerts that tend to occur at a certain time.
  Scope: Groups events together based on an operator-defined scope.
  Topological correlation: Groups events that occur on resources within a pre-defined section of your topology.
  Topological enrichment: Enriches those alerts that occur on resources that are located somewhere within the topology.
  Temporal patterns: Temporal correlation identifies groups of events that tend to occur together within your monitored environment.
  Self-Monitoring: A self-monitoring policy can be enabled to provide assurance that Cloud Native Analytics is processing events. This policy is disabled by default.

Last updated by
  Displays the last user or algorithm to update the policy and a timestamp of the modification.

Ranking
  Analytics policies are automatically ordered in the table based on a predefined ranking that is calculated by using the metrics of the policies. Policy metrics include criteria such as the maximum severity of the event or group and how recently the event or group occurred. The size of the group and the number of times a group or event occurs are also metrics that are used to rank policies. Hover over the ranking indicator to display the ranking metrics that are applied to that policy.

Max severity
  The maximum severity of events within the policy when it was found. By default, there are six severity levels, each indicated by a different colored icon in the event list. The highest severity level is Critical and the lowest severity level is Clear, as shown in the following list:

• Critical
• Major
• Minor
• Warning
• Indeterminate
• Clear

Event count
  Shows the number of events that the policy captures.

Occurrences
  In the Created by analytics and Rejected tabs, this column indicates the number of occurrences that are observed in the historical data when the policy was activated. In the Suggested tab, it is the number of occurrences of the policy in the historical data.

Actions
  Indicates the action that a policy is taking against incoming events. For example, Correlate groups a set of events together and Enrich updates the fields in a specific event.

Comment
  Text to describe the reasons for activating or rejecting a policy. The comment is saved together with the activated or rejected policy, providing you with an audit trail. Comments can be made against related events and temporal patterns policies.

Status
  Indicates whether a policy is Enabled or Disabled. With the exception of seasonality policies, you can click the toggle button to change a policy status. Disabled policies do not take any action against incoming events.

Accessing policies
Manage policies by accessing the Policies GUI.

Procedure

1. Click the navigation icon at the top-left corner of the screen to go to the main navigation menu.

2. In the main navigation menu, select Automations and click Policies.

Filtering policies
You can filter the list of policies based on policy type, locked state, and status.

Procedure

1. In the main navigation menu, select Automations and click Policies.

2. Click Filter.
3. Select from the following filters:
• Policy type
  – Temporal Grouping
  – Temporal Patterns
  – Scope
  – Seasonality
  – Topological Correlation
  – Topological Enrichment
  – Self-Monitoring
• Locked
  – Yes
  – No

• Status
  – Enabled
  – Disabled
4. Click Apply filters.

Activating policies
If your system is running in Review first mode, you must manually activate policies to take action against incoming events.

About this task With temporal grouping, you can choose a Deploy first or Review first policy deployment mode. In Deploy first mode, policies are enabled automatically, without the need for manual review. In Review first mode, policies are not enabled until they are manually reviewed and approved. The default mode is Deploy first. When Cloud Native Analytics temporal correlation is turned on and the policy deployment mode is Review first, suggested temporal patterns appear in the Suggested policies tab. Restriction: Only Temporal Grouping policies can be activated from the Suggested policies tab.

Procedure

1. In the main navigation menu, select Automations and click Policies.
2. Select the Suggested policies tab.

3. In the table row of the policy that you want to activate, click the menu overflow icon, and select Activate.
   The policy is activated and moved from the Suggested policies tab to the Created by analytics tab.
Related concepts
“About analytics” on page 205
Read this document to find out more about analytics, including temporal correlation, seasonality, and probable cause.

Refreshing the policies table
You can refresh the policies table to view all the latest policies at the current point in time. The policies table does not automatically refresh.

Procedure

1. In the main navigation menu, select Automations and click Policies.
2. If any filters are applied to the table, remove them first to view a complete list of policies.
3. Click Refresh.

A refreshed policy count is displayed beside the Filter and Refresh icons.

Managing multiple policies
You can batch select policies and apply actions to all of your selections at one time.

About this task This feature applies only to related events and temporal pattern policies. The checkboxes for all other policy types are grayed out and cannot be selected.

Procedure

1. In the main navigation menu, select Automations and click Policies.
2. Select multiple policies from the table by using the checkboxes at the beginning of the table rows.
   Tip: You can select all of the eligible policies in the table by clicking the checkbox in the table header beside Policy name.
3. A banner above the table displays the number of policies that are selected, together with a menu of options. You can apply actions to all selected policies at the same time. Choose from the following options:
• Enable
• Disable
• Lock
• Unlock
• Reject

Renaming policies
A globally unique identifier string is used to identify each policy. You can rename policies to give them more meaningful names.

About this task Complete the following steps to rename a policy.

Procedure

1. In the main navigation menu, select Automations and click Policies.

2. In the table row of the policy that you want to rename, click the menu overflow icon, and select Rename.
3. Enter a new name in the field that is provided and click Rename.

Rejecting policies
You can reject policies that you do not believe are valid. Rejected policies do not act on incoming events.

Procedure

1. In the main navigation menu, select Automations and click Policies.

2. In the table row of the policy that you want to reject, click the menu overflow icon, and select Reject.
3. When you reject a policy, you must add a reason in the comment field provided. The reason for the rejection is saved together with the rejected policy, providing you with an audit trail.
4. Click Reject. The policy is moved to the Rejected tab.

Administering Netcool/Impact policies
You can manage Netcool/Impact policies directly from the Cloud GUI main navigation menu.


Procedure

1. Click the navigation icon at the top-left corner of the screen to go to the main navigation menu.
2. In the main navigation menu, click Automations > Netcool Impact.

The Netcool/Impact GUI is displayed in a separate tab. For more information on managing Netcool/Impact policies, see the related link at the bottom of the topic.

Managing runbooks and automations
Netcool Operations Insight provides the capability to create and manage runbooks, which your operations teams can execute to solve common operational problems. Runbooks provide full and partial automation of common operations procedures, thereby increasing the efficiency of your operations processes.

About runbooks
Before creating runbooks, learn more about how runbooks work.

What is IBM Runbook Automation?
Use IBM Runbook Automation to build and execute runbooks that can help IT staff to solve common operational problems. IBM Runbook Automation can automate procedures that do not require human interaction, thereby increasing the efficiency of IT operations processes. Operators can spend more time innovating and are freed from performing time-consuming manual tasks.
Which problem does IBM Runbook Automation address?
IT systems are growing. The number of events is increasing, and the pressure to move from finding problems to fixing them is increasing. IBM Runbook Automation supports operational and expert teams in developing consistent and reliable procedures for daily operational tasks.
How can I simplify daily operational tasks?
Using IBM Runbook Automation, you can record standard manual activities so that they are run consistently across the organization. The next step is to replace manual steps with script-based tasks.
What is a runbook?
A runbook is a controlled set of automated and manual steps that support system and network operational processes. A runbook orchestrates all types of infrastructure elements, such as applications, network components, or servers. IBM Runbook Automation helps users to define, build, orchestrate, and manage runbooks, and provides metrics about runbooks. A knowledge base is built up over time, and collaboration is fostered between Subject Matter Experts (SMEs) and operational staff.
Video: Why IBM Runbook Automation Makes IT Ops Teams Efficient

Lifecycle of a runbook
Runbooks start as documented procedures on a piece of paper that can become fully automated procedures. Find out how you can move your documented procedures to fully automated runbooks.

Figure 12. Moving from manual to fully automated runbooks

The following steps outline the process of moving from documented procedures to fully automated runbooks:
1. Transfer your documented procedures into a runbook. For more information, see “Create a runbook” on page 378.
2. Assess the runbook and gather feedback. You can get information about the quality and the performance of the runbook by analyzing available metrics, for example, ratings, comments, and success rate. For more information, see Improving runbook execution.
3. Create improved versions based on the feedback. With each new version, improve the success rate. For more information, see “Runbook versions” on page 389.
4. Investigate which runbooks are suitable for automation. Which steps can be put into an automated procedure? Not every runbook moves from a manual to a fully automated runbook. Carefully consider which steps you want to automate, depending on the comments received, the effort it takes to automate, the frequency with which the runbook is used, and the potential impact automated steps could have. For more information, see “Creating an automation” on page 392.
5. Provide a new version with automated steps and see how it runs. For more information, see “Managing automations” on page 399.
6. Continue to provide more automations until all steps are automated. Run runbooks that are started by a trigger. For more information, see “Create a trigger” on page 401.

Library
View the available runbooks, find matching runbooks for the events that you need to resolve, and run the runbooks. You can also review the runbooks that you have used to date.

Run your first runbook
Note: If you are using IBM Runbook Automation for the first time, it is recommended to load example runbooks and then edit and publish a runbook. For more information, see “Load and reset example runbooks” on page 378 and “Edit a runbook” on page 379.
1. Click Library.

2. Select a runbook and click Preview to find out what a runbook consists of:
Details
    The details section provides descriptive details of the runbook. You can collapse and expand this section. By default, the section is collapsed. Expand the section to see the following fields:
    • Description: shows a description of the runbook.
    • Runbook ID: the ID of the runbook, which can be used to identify the runbook using the Runbook Automation HTTP REST Interface.
    • Frequency: shows the average frequency of running the runbook, for example, twice a day.
    • Rating: shows the average rating that is provided by the users of the runbook. 1 star is the lowest and 5 the highest.
    • Success rate: indicates how often the runbook completed successfully.
    • Parameter: parameter values put runbooks into the context of a particular problem, both in manual and automated steps.
Procedure
    The Procedure section displays the runbook steps. For more information about the runbook elements, see “Edit a runbook” on page 379.
Close the Preview. Open more example runbooks in preview mode to find out how runbooks are described.

3. Select one of the runbooks and click Run.

4. The Run runbook page opens, which consists of a sidebar on the left side of the window and a procedure description on the right side of the window. The sidebar can be collapsed and expanded using the Details icon beside the Runbook title. The sidebar contains the following collapsible sections:

• parameters – displays a list of the parameters used.
• commands – shows the number of commands used and a description of how to use the commands.
• goto – shows the number of goto elements used within the runbook and a description of how to use the goto elements.
• automations – displays a list of the automations used.
• Details – displays details such as description, Runbook ID, frequency, rating, and success rate.

5. The parameters section on the left of the window is expanded and you are requested to enter parameter values. Click Read more to see a description and a reference to where each parameter is used within the runbook. Enter the parameter values, for example, host1 for the parameter HOSTNAME, and click Apply and Run.
Note:
• In some cases, you might not know the parameter value when starting the runbook. For example, the parameter might be determined by a runbook step during the runbook execution. In this scenario, as soon as the parameter value is known, go back to the parameter section and enter the parameter value. To apply the value, click Update. The Update button is only displayed if the parameter value has been entered.
• In any step of a runbook, you can select the parameter section of the runbook in the sidebar on the left of the window. You might need to first expand the section if it is collapsed. The parameter can then be changed to a new value. To apply the new value, click Update (the Update button is only displayed if the parameter value has been changed). As soon as a parameter has been used by a runbook step, it cannot be changed.

6. Follow the instructions in the runbook and proceed to the next step by clicking Next step.

7. If the step contains a command, use the Copy button within a command to copy the command to your clipboard. Open a command-line window and paste the command.

8. If the step contains an automation, click on the automation and select Run to run the automation. The output will be displayed in a text box. If an automation produces JSON output, the JSON document is automatically formatted for better readability. Select Show information to see description, prerequisites and parameter values.

9. If you want to document your results of the runbook steps, expand the details section in the sidebar. Enter your comments into the My personal notes field.

10. You can pause or cancel the runbook by clicking Pause or Cancel. Pause temporarily suspends the execution of the runbook. You can pick up later from where you left off. You can use Pause to handle long running automations, for example automations that take longer than 10 minutes. The automations that are in progress continue while you pause the runbook. If you cancel the runbook you are prompted to select a reason. This feedback is helpful to the author of the runbook.

11. After you complete all steps, click Complete at the end of the runbook. Provide a rating, a comment (both optional), and a final assessment of the runbook. You are taken back to the Library page.

12. The Execution page provides the Runbook executions table. Here you can find all the runbooks that were executed, with information such as status, start time, version, comments, whether the runbook is automated, and available actions. Filters such as user, runbook status, runbook type, and runbook name can be applied. For more information, see “Executions” on page 391.

Load and reset example runbooks
IBM Runbook Automation provides ready-to-go sample runbooks. Load the examples to explore what you can do with runbooks.

Before you begin
You can load example runbooks if you have the RBA Author or RBA Manager role. For more information about roles, see Creating users.
Load examples
Load all pre-defined examples. You can also edit the example runbooks and create new versions.
1. Open the Library page.

2. Click Load examples on the upper right of the window (below the Search field).
3. The sample runbooks are shown in the Library page. The runbooks show up for Operators if they are published. The samples are marked by the text "Example:".
4. Click Preview to view the runbook and see what it does.

5. Click Run the steps of this runbook and run your first runbook. For more information about how to run a runbook, see “Run your first runbook” on page 376.
Reset examples
Follow these steps if you edited the examples and would like to reload the original versions:
1. Open the Library page.
2. Click Load Examples to reload the original version of the example runbook. Any changes that were made are overwritten.
Convert a sample runbook to a regular runbook
To save a sample runbook as a regular runbook:
1. Open the Library page.

2. On the runbook that you want to convert, click Edit.
3. Click Save as draft > Save As.
4. Change the runbook name by adding additional text or removing "Example:" from the name. You can optionally update the description.
5. Click Save. After saving, the runbook is no longer tagged as an example.
For more information about using the editor, see “Edit a runbook” on page 379.

Create a runbook
Document your existing operational procedures using the runbook editor. Share the knowledge and experience that you documented in a runbook with your team members.

Before you begin You must have user access as an RBA Author or RBA Approver to access the Library page.

About this task
Runbooks are created for one of the following reasons:
• Operators document their daily tasks in documents. Runbooks allow you to document and store these procedures in an ordered manner. When a runbook is published, the knowledge can be shared between all operators.
• A new set of events has occurred, and a couple of skilled operators work out a new procedure to solve the issue that caused these events.

Procedure
1. Click Library.
2. Click New runbook. The runbook editor opens. For more information about how to use the editor, see “Edit a runbook” on page 379.
3. Provide data to describe the runbook. Make sure that you use a Name for the runbook that describes what problem this runbook solves.
4. Provide a Description. The description contains information about specific situations that this runbook solves. Operators use the description to decide if a runbook meets their requirements.
5. Add Tags. Tags help to filter a large number of runbooks, and group runbooks with the same content.
6. Start inserting step separators. Click the icon in the icon palette of the editor. A horizontal line is added to the editor, marked as Step 1.
7. Click below the step separator and describe the first step of your procedure. Use commands, parameters, automations, or GOTO elements to distinguish between different types of elements within a runbook. Remember to always create a new step if you start to describe a new task that the operator can run.
8. When all steps have been documented, click Save draft > Save draft & close.
9. Test the runbook. Search for your runbook in the Library dashboard. Click Run the steps of this runbook to test the execution of the new runbook. Modify the runbook if necessary.
10. Publish your runbook.
a) Click Edit to open the runbook editor.
b) Click Publish. The runbook is now available to all operators.

Run a runbook
Run all steps that are described in a runbook. When you have completed the runbook, you can provide any comments you might have. To run a runbook, you must be assigned the role RBA User, RBA Author, or RBA Approver. Follow the steps outlined in “Run your first runbook” on page 376.

Edit a runbook
The runbook editor provides editing capabilities as well as runbook elements that help to document operational procedures. Runbook elements are designed to avoid mistakes. To open the runbook editor, click Library > New runbook to create a new runbook, or scroll through the list of runbooks and click Edit to open an existing runbook in the editor. The editor consists of the following areas:

Figure 13. Working areas of the runbook editor

Name
    The name identifies the runbook. Try to be as precise as possible and describe the problem that this runbook solves.
Description
    Describe the problems that this runbook solves in more detail and sketch out the solution.
Tags
    Provide tags to filter runbooks.
Runbook ID
    This field is generated by Runbook Automation. The Runbook ID can be used to identify the runbook using the Runbook Automation HTTP REST Interface. Use the copy button to copy the ID to the clipboard.
You can edit a runbook if you are an RBA Author or RBA Approver. For more information about roles, see Creating users.
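A quick way to verify a Runbook ID is to request the runbook through the HTTP REST Interface. The following is a minimal sketch only: the host name, credentials, and endpoint path are illustrative assumptions, not documented values; consult the API reference of your deployment for the actual URL and authentication scheme.

# Minimal sketch: fetch a runbook by its Runbook ID over the REST interface.
# RBA_HOST, the credentials, and the /api/v1/rba/runbooks path are
# assumptions for illustration; check your deployment's API reference.
RBA_HOST="rba.example.com"
RUNBOOK_ID="<value copied from the Runbook ID field>"

curl -s -u "apiuser:apipassword" \
  "https://${RBA_HOST}/api/v1/rba/runbooks/${RUNBOOK_ID}"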

Editing actions
Use the editor to describe your daily tasks step by step. The editor provides editing actions and runbook-specific elements to help you write the instructions. Use common text editing actions to edit and format your operator instructions.

Table 59. Editing actions

Undo
    You can undo your last 20 editing actions.
Redo
    You can redo your last 20 editing actions.
Paste
    You can copy text from any document and paste it into the editor. If you select Paste in the drop-down menu, text is pasted into the runbook editor with the original format. If you select Paste as plain text, text is pasted into the editor as plain text. The original formatting is removed.
Find and Replace
    Search for a text string within the runbook editor and replace it with another one. For example, search for db2 and replace it with DB2.
Font
    You can change the font to any of the following: Arial, Comic Sans MS, Courier New, Georgia, Lucida Sans Unicode, Tahoma, Times New Roman, Trebuchet MS, Verdana.
Size
    Change the size of your font, from font size 8 up to font size 72.
Bold
    Select text and make it bold by selecting b.
Italic
    Select text and make it italic by selecting i.
Text color
    Select text and then choose a color to change the text color.
Text background color
    Select text and click the background color that you want.
Remove Format
    If you paste text, the text is pasted with the original format. If you want to remove the format, select the text with the unwanted format and click Remove Format.
Left align
    Select text that is spread over several rows and click left align. All text rows are aligned left. You can also use this action to place text within one row.
Align center
    Select text that is spread over several rows and click align center. All text rows are centered. You can also use this action to place text within one row.
Right align
    Select text that is spread over several rows and click right align. All text rows are aligned right. You can also use this action to place text within one row.
Align justified
    Select text that is spread over several rows and click align justified. All text rows are justified. You can also use this action to place text within one row.
Numbered list
    Select text that is spread over several rows and click numbered list. All lines are indented and numbered.
Bulleted list
    Select text that is spread over several rows and click bulleted list. All lines are indented and bulleted.
Increase indent
    Select text that is spread over several rows and click increase indent. The whole text section is moved to the right.
Decrease indent
    Select text that is spread over several rows and click decrease indent. The whole text section is moved to the left.
Insert table
    If you want to insert a table, click Insert Table. Enter the number of rows and columns that you need, and specify the column width and the width unit.
Insert image
    If you want to add a screen capture, for example, select Insert Image.
Insert Link
    If you want to insert a link, click Insert Link and enter the link in the URL field. Remember to provide the link text and decide whether you want the link to open in another tab.
Insert Special Character
    If you want to add special characters, select a character from the palette provided.

For a description of the Runbook elements, see “Runbook elements” on page 382.

Runbook elements
Runbook elements are editing actions that describe operational tasks. Use the add step, add command, add parameter, add automation, and add GOTO actions to give your description a clear structure and supportive elements. Use the following runbook elements to edit your operator instructions.

Table 60. Runbook elements

Add runbook step
    Click the editor canvas. Select Add runbook step. A separator with a label is added to the canvas. Describe one step and then add the next step by selecting Add runbook step.
    Keyboard shortcut: Use the tab key to move the focus to the canvas. Press Ctrl + 3 to add a step.
    Remove step: Use the backspace key to remove a step.
Add command
    Click the editor canvas. Select Add command and enter your command. Click Enter to quit the command and continue in the next row. Another way to enter a command is to mark the command text and click Add command. As the command is still selected, you have to click the canvas to deselect the command. Click Enter to quit the command and continue in the next row.
    Keyboard shortcut: Use the tab key to move the focus to the canvas. Press Ctrl + 4 and begin typing your command.
    Remove command: You can remove the command by placing the cursor in the command and clicking Remove.
Add Parameter
    To add a parameter, go to the Parameter palette and drag the parameter that you require to the canvas. Or mark the text that should be a parameter name and click Add parameter.
    Keyboard shortcut: Use the tab key to move the focus to the canvas. Press Ctrl + 8. A parameter menu is added. Click the arrow and select the required parameter.
    Remove parameter: Select the parameter and click Remove.
    For more information about creating parameters, see “Adding parameters” on page 385.
Add automation
    To add an automation, go to the automation palette and drag the automation that you require to the canvas. A dialog opens to map parameter values. Click Apply to add the configured automation to the runbook.
    Keyboard shortcut: Use the tab key to move the focus to the canvas. Press Ctrl + 7. A dialog opens with all available automations. Select an automation and click Add. A dialog opens to map parameter values. Click Apply to add the configured automation to the runbook.
    Remove automation: Select the automation and click Remove.
    Change configuration: To change the configuration of an automation, click the automation and select Configure. A dialog opens and you can change the parameter mappings.
    For more information about how to add and configure an automation, see “Adding automations” on page 384.
Add GOTO
    Click the editor canvas. Select Add GOTO. A GOTO element is added with the label END. Click the GOTO element and select the step that you want to jump to. You can select any subsequent step.
    Keyboard shortcut: Use the tab key to move the focus to the canvas. Press Ctrl + 5. The GOTO element is added.
    Remove GOTO: Select the GOTO element and click Remove.
    For more information about how to use the GOTO element, see “Adding a GOTO element” on page 387.
Add collapsible details
    To add collapsible details, click Insert collapsible details. Enter a title for the collapsible section. Then enter the information that you want to collapse and expand in a section.
    Keyboard shortcut: None.
    Remove collapsible details: Use the delete or backspace key to delete collapsible details.

Adding automations
Automations are used to run several steps automatically. Use the Automations page to create automations.

About this task
The job of an operator can involve repetitive tasks, for example, "log on to DB2" or "start server dagobert45". Automations are scripts or IBM BigFix fixlets that document these steps. If you use automations within your runbook, the operator does not have to run these steps manually; they can run all steps with one click. For more information about how to create automations, see “Creating an automation” on page 392.
The runbook editor displays any created automations in the Automations pane. Only one automation can be added to a runbook step.
It is common practice for automations to contain parameters. When adding automations to a runbook, you must decide how the values of the automation parameters are filled by creating a parameter mapping. Select from the following options:
Use a runbook parameter
    A runbook parameter, which can be filled by a Trigger from an event or manually by an operator executing the runbook, is used to fill the specific automation parameter. When adding an automation to a runbook, you can select existing runbook parameters or directly create a new runbook parameter.
Define a fixed value
    The automation added to the step of a runbook is always launched with a fixed value for the parameter. (If the same automation is used in a different runbook, another fixed value can be defined there.)
Use the default from the automation
    If the automation defines a default value for this parameter, it is possible to use that value.
Use the output of a previous automation
    The output of an automation that was added to a previous step can be used as input for the automation of the current step.
Use the logged in user
    The parameter value is filled with the username of the user who is logged in to Runbook Automation at the time when the runbook is run.
    Note: If the username contains the @ symbol, for example because the username is an email address, the @ symbol and all characters that follow it are removed from the username.

Procedure
1. Search for the automation. Type in the search term that describes the automation that you need.
2. If your search results in several hits that look similar, click Preview to view the automation in more detail.

3. Add the automation to your runbook. Add a step to your runbook and click Add next to the automation name in the Automation pane. You can also preview the details of an automation and then click Add to runbook within the preview dialog. The automation is added at your cursor position. If the cursor focus is not within the editor window, a new step is created at the beginning of the runbook with the automation as a step description.
4. Select the parameter mapping type. A window opens with automation details and the parameter definitions. In the Mapping column, select how automation parameters are to be filled with values. You can choose from the following options:
New runbook parameter
    Add a parameter. If default values are provided, update the default values as required. Enter a new parameter name and description.
Fixed value
    An entry field is provided to enter the parameter value.
(Existing) runbook parameter
    Select the parameter from the list of existing parameters.
Use default
    Use the default value set by the automation.
Automation output
    Choose the automation from a previous runbook step. The output value of that automation is used as the parameter value for the current automation. This option is only available if the runbook contains a previous step with an automation.
Use logged in user
    Select this option to fill the parameter value with the username of the user who is logged in to Runbook Automation at the time when the runbook is run.
    Note: If the username contains the @ symbol, for example because the username is an email address, the @ symbol and all characters that follow it are removed from the username.
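The truncation rule for the logged-in user can be pictured with a one-line shell equivalent. This is only an illustration of the documented behavior, with a hypothetical username; the truncation itself happens inside Runbook Automation, not in your script.

# Everything from the @ symbol onward is removed before the value is
# passed to the automation parameter.
user="jane.doe@example.com"   # hypothetical username
echo "${user%%@*}"            # prints: jane.doe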

What to do next
Edit parameter configuration
    If you want to edit the parameter configuration of your automation, select the automation and click Configure. Change your settings and click Apply.
Automation with errors
    If the automation is decorated with an error symbol, the parameter settings are not correct. Select the automation and click Configure to correct the parameter settings.
Delete an automation
    If you want to remove the automation, select the automation and press Delete on your keyboard, or select the automation and click Remove.

Adding parameters
Parameters are used as general placeholders for values that are used on a frequent basis. Examples of parameters are DATABASE_NAME, DIRECTORY, HOSTNAME, URL, and so on.

Before you begin
You can use runbook parameters in the following scenarios:
• Parameters can be used as variables that get substituted by a value in the text of a runbook step, for example:

STEP 2 In this step you determine the longest running process of system $SYSTEM

• As input to a command defined in a runbook, for example:

STEP 5 Now issue the command: cd $DIRECTORY

• Parameters of an automation can be filled with the value of a parameter defined for the runbook.
• Automations of type BigFix and Script have a system parameter target, which defines the target system where the BigFix Fixlet or script is run.
• Automations of type Script have a system parameter user, which specifies the UNIX username that is used to run the script on the target system. This parameter can be mapped automatically to the user who is currently logged in to Runbook Automation. For more information, see “Adding automations” on page 384.

About this task
Operators enter values such as username, password, hostname, database name, or URL on a frequent basis. If you have to deal with many different computer systems, you must remember numerous usernames, passwords, hostnames, or database names. To avoid typos or having to remember values, you can define parameters. For example, you can define a parameter DATABASE_NAME and either provide a default value or enter the value when you run the runbook. Default values are the best fit if you do not want operators to remember and enter the value.
Parameters are local to the runbook. They are not available to other runbooks. Parameters are used within runbooks and for automations. Within runbooks, you can use parameters either standalone, for example URL, or within a command, for example connect to HOSTNAME.
Parameters must be filled with values when a runbook is executed. This can be done in the following ways:
• When a runbook is launched by clicking the execute button from the Runbook Automation UI, an operator must first enter the parameter values. The operator is prompted with the name and description of all parameters defined for the runbook, as well as an optional default value.
• When a runbook is launched from an event, a Trigger defines which values of the event data are mapped to which parameter of the runbook. Values that are entered in this way cannot be changed.
Runbook parameter values can be changed during the execution of a runbook. In any step of a runbook, you can select the parameter section of the runbook in the sidebar on the left. You might need to expand the section first, if it is collapsed. The parameter can then be changed to a new value. To apply the new value, click Update (the Update button is only displayed if the parameter value has been changed). Steps of the runbook that are not yet executed reflect the new value.
Runbook parameters can be used to define parameter values for automations within the same runbook. Parameter values for automations can also be filled using the result of a previously executed automation. For example, the output of an automation Find large file systems that ran in a previous step is available as the automatic parameter Automation output of a previous automation for automations in the current step. It is possible to use the output of any automation that ran earlier in the runbook.

Procedure

1. Create a parameter. Click Add Parameter. A dialog box opens where you can enter the parameter name, the description, and the default value. If you select Not mandatory on start, you can start the runbook execution without defining a value, and set the parameter value during runbook execution. This setting is useful if the parameter value is not available at the beginning of the execution, but becomes available during execution. When you enter the runbook parameter name, you can use alphanumeric characters, characters from national languages, and some special characters. However, do not use the special characters ampersand (&), backslash (\), space ( ), greater than (>), or less than (<) when you define runbook parameter names (a small validation sketch follows this procedure). As a result, a parameter of the runbook is created and can be used as a variable in the text of a step, as a variable value of a command, or as input of an automation added to a runbook step.
2. Use the parameter. Drag and drop a parameter from the parameter panel to the canvas. For more information about how to use parameters, see “Runbook elements” on page 382.

Runbook parameters can be dragged and dropped to a location in the text of a step, or to replace a part of a command that must be executed by an operator. Parameters of a runbook that are used by an automation must be changed through the automation configuration (see step 4).
3. Edit a parameter. Click the edit icon next to the parameter that you want to edit.
4. To edit a parameter used by an automation, click the automation that you have added to a runbook step and then click Configure.
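The naming restrictions in step 1 can be checked mechanically before a parameter is created. The following shell sketch is illustrative only; the check is hypothetical and not part of the product.

# Reject parameter names that contain the documented forbidden
# characters: ampersand, backslash, space, greater than, less than.
name="DB NAME"
if printf '%s' "$name" | grep -q '[&\ <>]'; then
  echo "invalid parameter name: $name"
else
  echo "valid parameter name: $name"
fi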

Adding a GOTO element
The GOTO element is used to skip steps or jump to the end of the runbook. Use the GOTO element in the following scenarios:
The execution was successful and you can complete the runbook
For example:
1. Restart the Application Server.
2. If this was successful, then go to the END.
3. Check your log files.
Run optional steps as the result of a check
For example:
1. Log in to the server where you want to start the application server.
2. Check the current CPU consumption. If it is below 80%, GOTO Step 5.
3. Find the process consuming the most CPU.
4. Determine the process.
5. Start the application server.
6. Check the log file for confirmation that the application server is started.

Configuring runbook creation
Use these procedures to define the runbook creation process and control the publication of runbooks.

Configuring runbook approval
Define whether to use the approval workflow or the direct publish workflow to make runbooks available for execution by operators.

Before you begin
You must be assigned the manager role to configure the runbook creation process.

Procedure
1. Navigate to Library.
2. Expand the drop-down menu.
3. Click Settings next to the search field.
4. Select Approve runbooks before publish if you want runbooks to be approved before they are published.
5. If you want to enforce that an approval must be refreshed at a regular interval, select Set expiration date after xxx days and choose a time interval. If this option is enabled, the runbook approval expires after the specified number of days, and the runbook can no longer be executed until it is reviewed, submitted for approval, and approved again.
The deactivated approval process is described in “Create a runbook” on page 378. The activated approval process is described in “Runbook approval process” on page 388.
Note: If the approval process is deactivated, all pending approvals are deleted.

Runbook approval process
The runbook approval process offers control over the publication of runbooks. This is beneficial if a mandatory review process is necessary, if prematurely released runbooks could be harmful, or when auditable information on the publication is required.

About this task

Figure 14. Runbook approval process

Figure 14 illustrates the states and transitions of the runbook approval process. If the approval process is enabled, it is not possible to directly publish a runbook. Instead, an option to submit the runbook is displayed. The following options are then available:

Procedure
• List all runbooks that must be approved:
  a) Open the Library page.
  b) From the Status filter drop-down menu, select Pending approval.
  c) All runbooks for which approval is pending are shown.
• List all runbooks that are approved:
  a) Open the Library page.
  b) From the Status filter drop-down menu, select Approved.
  c) All approved runbooks are shown.
  Note: You can also filter for a status of Approval expired and Approval rejected.
• Submit a runbook draft for approval. The roles that can perform this action are Author, Approver, and Manager.
  a) Create or edit a runbook and click Submit for approval.
  b) Enter the name of the approver in the field provided and click Send.
  The runbook is now in a pending state. It can no longer be edited, except for changes to its state.
• Cancel a pending runbook approval. The roles that can perform this action are Author, Approver, and Manager.
  a) Locate the runbook on the Library page.
  b) Click Approve or reject this runbook.
  c) Click Remove approval.

  The runbook is now in normal draft mode and can be edited. The assignee can no longer approve or reject the runbook.
• Approve a runbook. Only the assigned person from the submission can approve a runbook. The roles that can perform this action are Approver and Manager.
  a) Locate the runbook on the Library page.
  b) Click Approve or reject this runbook.
  c) Click Approve.
  The runbook now has a status of approved, which is the default for execution. New drafts can be created. Information pertaining to the approval assignee and timestamp is stored within the runbook.
• Reject a runbook. Only the assigned person from the submission can reject a runbook. The roles that can perform this action are Approver and Manager.
  a) Locate the runbook on the Library page.
  b) Click Approve or reject this runbook.
  c) Click Reject.
  The runbook now has a status of rejected and can be edited again. Information pertaining to the approval assignee and timestamp is stored within the runbook until a new draft version is created, at which point the old information is deleted.

Runbook versions
You can use versioning to create incremental improvements to runbooks and monitor the success of changes. Subject Matter Experts can continuously improve the quality of runbooks by using the comments provided and the available run metrics. As a result, different versions of a runbook are created.
All available runbooks are listed on the Library page. Click the arrow at the beginning of the row to display all the versions of a runbook.
Draft version
    The draft version is a work-in-progress runbook. It is marked with a draft flag. A draft runbook is not yet published and does not run in production. It is not visible to any operator. You can run and test the current draft.
Latest published version
    The latest published version is the version that is running and used by all operators. This is the version that the operations analyst can preview and work with.
Archived version
    Archived versions are previously used versions. They are not in production and cannot be used by any operations analyst. The metrics of the previous versions help the Subject Matter Expert to monitor continuous improvements of the runbook. You can run an archived version to learn why it did not run successfully.
You can create different versions of a runbook if you are an RBA Author or RBA Approver. For more information about roles, see Creating users.

Save and publish a runbook
Creating a runbook involves documenting complex steps. These steps must be reviewed and tested before you can publish the runbook for use in production. After a runbook is published and used in production, you might improve it based on the feedback that you receive from comments, success rate, and ratings. The runbook editor provides the following actions:
Cancel
    Cancel closes the runbook editor without saving any changes.
Save as
    Use this action to save a copy of the runbook using a different runbook name.

Save draft
    Save draft saves the updates that are added. The editor remains open. Save draft is useful if you are editing a runbook and you want to take a break and continue later, or you just want to save your changes as you go.
Save draft & close
    Save draft & close saves your changes and closes the editor. Use this option if you need to work on a different item.
Publish
    This button is only available if the approval process is deactivated. You can publish a runbook if you are an RBA Approver. For more information about roles, see Creating users. After a runbook is published, it can be used by all operators in a production environment.
Submit for Approval
    This button is only available if the approval process is enabled. If the approval process is activated, see “Runbook approval process” on page 388 for more information about approval before publication.

Filtering runbooks
Filter by status, groups, or tags to easily locate runbooks.

Procedure
• Click Library.
• To filter for runbook status, select one or more items in the Status filter. The filter matches any selected status (OR condition).
  Note: Filtering for status Draft displays runbooks that have never been published. Draft versions of already published runbooks are not displayed.
• To filter for groups, select one or more items in the Group filter. The filter matches any selected group (OR condition).
• To filter for tags, select one or more items in the Tag filter. All tags must match to apply the filter (AND condition).
• If a combination of several filters is used, all filters must match to apply the filter (AND condition).
• Additionally, you can filter by runbook name using the Search field. The search field allows you to search using regular expressions. If you want to search for special characters that are used by regular expressions, for example plus (+), asterisk (*), question mark (?), brackets ( () [] {} ), or backslash (\), you must escape those characters. For example, if you want to search for (T), you must enter \(T\) as the search string.
Note: The filter drop-down list displays up to 50 available filtering options. You can start typing the name of the filter option that you are looking for. As you type, the list displays only items that contain the typed text.

Results
Only runbooks that match the specified filter are displayed.

Delete a runbook
Users assigned the RBA Manager role can delete a runbook. To delete a runbook:
1. Open the Library page.
2. Use the check boxes to select one or more runbooks.
3. An action bar is displayed showing the available actions and the number of runbooks selected.

4. On the action bar, click Delete.
If you delete a runbook, all versions and the execution history of the runbook are deleted as well. For more information about roles, see Creating users.

Executions
See recently executed runbooks, including runbooks in progress. If there are more than 1000 runbook instances, the latest 1000 runbook executions are shown. You can filter the list on the Executions page by the following criteria:
User who executed the runbook
    Select the User filter and select one or more users. All runbook executions for the selected users are shown. For example, if you want to see only the runbooks that you have executed, select your user name.
Runbook name
    Select the Name filter and select one or more runbook names. All runbook executions for the selected runbook names are shown.
Runbook execution status
    Select the Status filter and choose a status such as Success, Failed, Cancelled, In progress, or Completed. For example, you can find all runbook executions that are paused by selecting In progress. When you are ready to return to the task, you can resume the runbook. Select the Cancelled status if you want to see all cancelled runbooks and the reasons why they were cancelled.
Runbook type
    Select the Type filter and select type Manual or Triggered automatically. Fully automated runbooks are runbooks in which each step contains an automation. An operator does not need to interact with the runbook. Automated runbooks can be mapped to events by creating a trigger. To see the automated runbooks that are started by a trigger, click Triggered automatically. For more information about how to create automations, see “Automations” on page 391. For more information about how to create a trigger, see “Triggers” on page 400.

Monitor runbook history
You can monitor the runbook history if you are an RBA Author or RBA Approver.
1. Click Library.
2. Select a runbook.

3. Click History. The execution history table and runbook details such as description, type, rating, success rate, and execution statistics are displayed.
4. Use the filter box in the upper right corner to display only specific runbook executions, for example:
   a. To see runbook executions of the last seven days only, select Last 7 days.
   b. To see runbook executions that are still in progress, select In progress.
If you want to resume a runbook execution, click Resume. For more information about roles, see Creating users.

Automations
Create an automation to summarize and automate several steps into a single step. In runbooks, an automation is the collection of several manual actions into a single automated entity. Automations use the parameters of the runbook. Customize the parameters for the execution of the runbook. Parameters lower the time needed for the execution of a runbook. Automations eliminate the risk of the manual errors that come from repeating the same steps many times. The Service Delivery Engineer can provide an automation to replace frequently used steps with a single click. As the automation needs a target system where it is processed, an existing connection to the local

Chapter 7. Administering 391 environment is required before automations can be added. For more information about how to set up a connection, see “Create a connection” on page 246.

Creating an automation
Create an automation by defining the automation type and configuring the parameters and fields. Automations are routines that run a task automatically. There are different types of automations, depending on how tasks are run. Runbook Automation supports automations of type Script, BigFix, and HTTP. For automations of type Script and BigFix, a connection needs to be configured. To learn how to configure a connection, see “Create a connection” on page 246. You must be logged on as an RBA Author to create automations.

Creating Script automations
You can create an automation of type Script by using the SSH provider. The operating system of the endpoint system can be either UNIX (including AIX and Linux) or Windows. For UNIX, the SSH provider executes a script using the default shell. Alternatively, a shebang (#!) can be used to execute the script in any scripting language the target system can interpret. The SSH provider does not require an additional agent, as it uses a direct remote execution connection via SSH. For Windows, the SSH provider can execute a Powershell script.
Note:
• To access the system via SSH, you must set up an SSH server on the Windows system, for example OpenSSH for Windows Server 2019 or Windows 10. The Windows system must have Powershell installed as the default shell of the SSH server.
• If you are using a jumpserver, the jumpserver must be a Linux system. You cannot use a Windows system as a jumpserver.
You must install the IBM Secure Gateway if you are using Runbook Automation in the cloud. The IBM Secure Gateway is not required for a private deployment of Runbook Automation.
Complete the following steps to enable script automations:
1. Create a Script connection, see “Connections” on page 245.
2. Create an automation of type Script. Click Automations > New Automation. Complete the following fields:
Type
    Select Script. If you want to run a script as an automation, you must configure a Script Automation Provider on the Connections page.
Name
    Provide a name that describes what this automation does. For example, Find large file systems.
Prerequisites
    If this automation requires prerequisites, add this information. For example, DB2 Version 10.5 or higher.
Description
    Provide any helpful additional information so that the user can immediately understand which problem this automation solves.
Shell
    Select Bash for scripts that run on UNIX systems. Select Powershell for scripts that run on Windows systems.
Script
    You can either select Import to select the script from your file system, or you can directly enter the script into the editor. If you selected Bash, the script can only be run on UNIX systems using the

default shell or the interpreter that is specified after the shebang (#!). If you selected Powershell, the script can only be run on Windows systems using Powershell.
    Note: Imported scripts must be in UTF-8 format if they contain non-ASCII characters (for example, Chinese).
Parameters
    Add input parameters to run the automation script. Those input parameters are available to the script as environment variables. For example, a parameter filesystem is used in a Linux script as $filesystem. The $ sign is automatically added to distinguish user-defined parameters from system parameters such as target and user. The parameter is exported without the dollar sign.
    Note: Input parameter names must consist of alphanumeric characters and the underscore character only.
The following system parameters exist:
target
    Mandatory parameter. Applies to automations of type BigFix and Script. The target parameter is created to define the target machine where the BigFix Fixlet or script is running.
user
    Mandatory parameter. Applies to automations of type Script. The user parameter defines the UNIX or Windows username that is used to run the script on the target machine.
    Note: If the value of the user parameter contains the @ symbol, for example an email address, the @ symbol and all characters that follow it are ignored.
3. Add the automation to the runbook, see “Adding automations” on page 384.
4. Run the runbook, see “Run a runbook” on page 379.
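To make the fields above concrete, the following is a minimal sketch of what the Script field might contain for the Find large file systems example. It assumes Bash on a UNIX target; the df and awk commands and the 90% threshold are illustrative choices, and the input parameter filesystem is assumed to be defined on the automation.

#!/bin/bash
# Sketch of a script automation body. The automation parameter
# "filesystem" arrives as the environment variable $filesystem;
# target and user are system parameters handled by the provider.
usage=$(df -P "$filesystem" | awk 'NR==2 {gsub(/%/, "", $5); print $5}')
echo "filesystem $filesystem is ${usage}% full"
# The exit code drives the automation status: zero reports successful,
# non-zero reports unsuccessful (see the next topic).
[ "$usage" -lt 90 ]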

Single and Multi Target Automations
You can use single target or multi target automations to execute script automations.

Single Target
The parameter with the name target has special meaning to script automations, as it defines the system on which the automation runs. The system can be defined by a short hostname, an FQDN, or an IP address. When executed, a script automation connects to the system identified by the target and executes the script there. The execution result of the script is reflected in the status of the automation. Possible automation states are executing, successful, unsuccessful, failed, and unknown. In the case of a fully automated runbook execution, this status is also used to decide whether the runbook is canceled or continued.
Examples:
• An automation that receives prod-server1 as the content of the target variable executes on the system prod-server1. The script exited with a return code of zero, so the state of the automation instance is set to successful.
• An automation that receives 192.168.55.56 as the content of the target variable executes on the system with the matching IPv4 address. The script exited with a non-zero return code, so the state of the automation instance is set to unsuccessful.
• An automation that ran in step 1 of a fully automated runbook failed to reach the endpoint (for example, because the server was not available), and the automation could not be executed at all. The status of the automation instance is failed. This status is reflected in the status of the execution. The execution stops after the first step and reports a failure.

Multi Target Automations
The other mode of operation of an automation is the Multi Target Automation (MTA). To execute an automation as an MTA, specify the target string in the following format: [ $target1, $target2, …, $targetN ]. The following rules apply to MTAs:

• The target string must begin with a left square bracket ( [ ) and must end with a right square bracket ( ] ).
• Between the square brackets, define a list of systems separated by commas (,).
• Duplicated entries are detected and ignored. This means that specifying the same target multiple times has no effect.
• An empty list is also allowed.
Multi Target Automations allow you to execute the same automation on any number of targets in parallel, removing the necessity to execute the same automation with different targets sequentially. If the target is specified in this way, how the automation is executed and how the results are treated differ. The following criteria apply to MTAs:
1. For every entry in the comma-separated list, the script is executed on the specified target. Note: You cannot specify the same target more than once.
2. All actions and information are summarized in one log. This log follows a specific format, see “Output format of Multi Target Automations” on page 394.
3. The status of the execution is executing as long as at least one execution is still ongoing, and successful once all script processes are finished. The result is successful even if some or even all automations reported unsuccessful, failed, or unknown.
4. Specifying an empty array is allowed. This special case is called a Conditional Automation. No actual execution occurs, but an execution record is still created. The record always reports successful as its state. If the runbook is executed fully automated, the runbook instance proceeds to the next step.
Examples:
• An automation receives [prod-server1] as the target. It executes the script on the system prod-server1. The script exited with a non-zero return code. The status of the execution is successful. The output of the automation instance contains information that this script execution was unsuccessful on prod-server1.
• An automation receives [prod-server1, prod-server2, prod-server3] as the target. It executes the script on three systems in parallel. All scripts exit with zero as the return code. The output and status of each of the three script executions is written to the output. The status of the overall execution is successful.
• An automation receives [] as the target. This triggers the conditional automation workflow. No execution takes place. The execution record has an almost empty output, and the status is successful.
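When a runbook parameter is filled from an event or a script, the target string can be assembled programmatically. The following Bash sketch (the helper name is hypothetical) joins any number of hosts into the documented bracketed, comma-separated form; deduplication is not needed here because the server ignores duplicate entries.

# Hypothetical helper: build an MTA target string from a list of hosts.
build_mta_target() {
  local joined="" host
  for host in "$@"; do
    joined="${joined:+$joined, }$host"   # comma-separate the entries
  done
  printf '[%s]\n' "$joined"
}

build_mta_target prod-server1 prod-server2   # -> [prod-server1, prod-server2]
build_mta_target                             # -> [] (conditional automation)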

Output format of Multi Target Automations

Despite being executed on any number of endpoints, the results of Multi Target Automations (MTA) are consolidated into one execution record. This record combines the information from all executions. The output is formatted as follows:
• If the script produces output, the output is displayed after two prefixes. The first item is the name of the target. The second item is either out or err, depending on whether the output was written to stdout or stderr.
• The log groups the output by target. The complete output of the first target is displayed before the output of the second target, and so on. This means the overall output is not ordered chronologically, although the output for each target is.
• A summary is always present at the end of the output. The summary consists of three additional lines that follow the pattern $status : [ $listOfTargetsWithThisState ], where $status is one of unsuccessful, successful, or failed.
• If the target array is an empty array, the output contains only the summary.

Example

An automation has been executed with the target [prod-server1, prod-server2]. The execution on prod-server1 was successful, while the execution on prod-server2 was unsuccessful. The output looks as follows:

prod-server1 out: Some output from the script to stdout happening at 08:34
prod-server1 out: More output from the script to stdout happening at 08:36
prod-server1 out: Script prints more to stdout
prod-server1 status: successful

prod-server2 out: Some output from the script happening at 08:35
prod-server2 out: More output from the script happening at 08:37
prod-server2 err: Script prints output concerning a failure to err
prod-server2 status: unsuccessful

unsuccessful: [prod-server2]
successful: [prod-server1]
failed: []

Comparison of Single and Multi Target Automations

The following table outlines the differences between the two operation modes:

Table 61. Comparison of Single and Multi Target Automations

Aspect                                                  | Single Target                  | Multi Target Automation
Number of parallel executions                           | One                            | Number of entries in target array
Final execution status                                  | Equal to status of automation  | Always successful
Failure during fully automated runbook (FARB) execution | Aborts the FARB                | Proceeds with the FARB
Specifying an empty target                              | Failure to execute automation  | Enters special case for empty array

Creating BigFix automations

If you want to use an automation of type BigFix, install the required components to run the automation. You must complete the following steps before you can use an automation of type BigFix:
1. Create a Script connection, see “Connections” on page 245.
2. Create an automation of type BigFix (you need to know the site ID, the fixlet or task ID, and the action ID). Click Automations > New Automation. Complete the following fields:
Type
Select IBM BigFix. If you want to run an IBM BigFix fixlet or task as an automation, you must configure an IBM BigFix Automation Provider on the Connections page.
Name
Provide a name that describes what this automation does. For example, Find large file systems.
Description
Provide any helpful additional information so that the user can immediately understand which problem this automation solves.
Prerequisites
If this automation requires prerequisites, add this information. For example, DB2 Version 10.5 or higher.
Site Name
The Site Name is the location where the fixlet or task runs. For example, BES Support.

Fixlet ID
Provide the numerical ID of the fixlet or task. For example, 168.
Action
Provide the action that you want to run within your fixlet or task. For example, Action1.
Target machine
Mandatory field. The Target machine parameter defines the target machine on which the fixlet runs. Each automation must define a target machine.
Parameters
Add input parameters to run the IBM BigFix action. Those input parameters are available to the fixlet as environment variables. For example, a parameter filesystem is used in a Linux fixlet as $filesystem. The $ sign is automatically added to distinguish a user-defined parameter from a system parameter such as TARGET. The parameter is exported without the dollar sign. For Windows fixlets, parameters are available in the fixlet with the percentage sign, for example %filesystem%. (See the sketch after this procedure.)
3. Add the automation to the runbook, see “Adding automations” on page 384.
4. Run the runbook, see “Run a runbook” on page 379.
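As an illustration of the parameter handling described above, the shell portion of a hypothetical Linux fixlet that consumes the filesystem parameter might look like the following sketch. The surrounding BigFix action script syntax is omitted, and the fixlet content itself is an assumption, not product-supplied.

#!/bin/sh
# Hypothetical Linux fixlet body: the user-defined "filesystem"
# parameter is exported as an environment variable and referenced
# with a dollar sign, exactly as the Parameters field describes.
df -h "$filesystem"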

Creating HTTP automations

The HTTP automation provider allows you to send HTTP requests to a specified web service. The specified web service can be in your private data center or in the public internet, provided that the RBA server can reach the destination. Ensure that the network setup, for example firewalls, allows the RBA server to reach the API endpoint of the web service.
1. Click Automations > New automation. Complete the following fields:
Type
Select HTTP.
Name
Provide a name that describes what this automation does. For example, IBM Watson Translate.
Description
Provide any helpful additional information so that the user can immediately understand which problem this automation solves.
Prerequisites
If this automation requires prerequisites, add this information. For example, Watson service credentials are required.
API endpoint
Specify the HTTP API endpoint of the web service to which the HTTP request is sent. For example, https://gateway.watsonplatform.net/qagw/service/v1/question/1?TestIt.
METHOD
Choose the HTTP method. For example, POST.
Username
If basic authentication is required to use the web service, specify the API user name for basic authentication.
Password
If basic authentication is required to use the web service, specify the API password for basic authentication.
Accept
Specify the accept request header. The accept header specifies the media types that are acceptable for the response of the request. For example, text/html.

Accept-Language
Specify the accept-language request header, which is used to restrict the set of languages that are preferred for the response of the request. For example, en-US.
Additional headers
Optionally, specify any additional request headers that are needed for the request. For example, accept-charset: utf-8.
Ignore certificate errors
Select this check box to ignore certificate errors. Use this option only for test purposes. In production environments, ensure that the correct certificates are installed on the target web service.
Parameters
Add input parameters to run the automation. Those input parameters can be referenced in the entry fields. For example, a parameter text can be referenced as $text. A sketch of an equivalent request follows this procedure.
2. Add the automation to the runbook, see “Adding automations” on page 384.
3. Run the runbook, see “Run a runbook” on page 379.
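Conceptually, the automation configured above issues a request similar to the following curl call. This is an illustration only, not the provider's implementation; the request body (question=$text, where $text stands for the automation parameter) is a hypothetical way of passing the parameter, and the endpoint is the example from the API endpoint field.

# Illustrative equivalent of the configured HTTP automation.
curl -X POST \
  -u "apiuser:apipassword" \
  -H "Accept: text/html" \
  -H "Accept-Language: en-US" \
  -H "accept-charset: utf-8" \
  --data 'question=$text' \
  "https://gateway.watsonplatform.net/qagw/service/v1/question/1?TestIt"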

Ansible Tower Automations

The Ansible Tower Automation Provider allows you to create "Ansible Tower" type automations. These automations reference an existing asset from a connected Ansible Tower installation. Ansible Tower Automations can reference the following:
• Job Templates
• Workflow Job Templates
Unlike Script Automations, Ansible Tower Automations do not define new automation content inside IBM Runbook Automation. Instead, they point to an existing Job Template or Workflow Job Template that is present on the Ansible Tower server. When an Ansible Tower Automation runs, it calls the "launch" operation of that template on the connected Ansible Tower server. If the template requires prompting for specific variables, these variables are sent by Runbook Automation, which allows them to be defined as automation parameters. See the topics that follow this section for more information about Ansible Tower Automations.

Connecting an Ansible Tower server

About this task
Complete the following steps to create a connection to an Ansible Tower server.

Procedure
1. Open the Connections tab.
2. Click Edit on the Ansible Tower card.
3. Enter the base URL of your Ansible Tower server. This URL must contain the protocol, for example: https://ansible-tower-server.mycompany.com:443.
4. Choose an authentication type. You can select Basic Auth to connect with user name and password, or API Token to use a bearer token that was previously created with Write Scope in Ansible Tower.
5. Enter the chosen authentication information.
6. Optional: Enter the Ansible Tower server certificate or certificate chain (see the verification sketch after this procedure).
7. Click Save to store the connection information.
Note: When using the standard Ansible Tower installation, a self-signed certificate issued for CN localhost might be generated. Make sure to replace that certificate with a certificate issued for the actual domain name you will be using. Otherwise, the connection might not work.
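Before you save, you can check from a shell that the base URL and certificate are usable. The following sketch assumes the standard Ansible Tower REST path /api/v2/ping/ and a certificate chain saved as tower-ca.pem; adjust both for your installation.

# Quick reachability and certificate check against the Tower base URL.
curl --cacert tower-ca.pem \
  "https://ansible-tower-server.mycompany.com:443/api/v2/ping/"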

Job Template parameters and surveys

In Ansible Tower, a Job Template or Workflow Job Template can define a number of parameters and properties. They can be defined directly in Ansible Tower's template itself, or can be set to "prompt on launch". Every parameter that is set to be prompted must be supplied during template launch by IBM Runbook Automation. In the case of an Ansible Tower Automation executed through Runbook Automation, this translates to automation parameters.
Job Templates in Ansible Tower can accept input for the following parameters:
• diff_mode (type: boolean)
• extra_vars (type: string in JSON format)
• inventory (type: integer)
• credentials (type: array of integers)
• limit (type: string)
• verbosity (type: integer; range 0-5)
• tags (type: string)
• skip_tags (type: string)
Workflow Job Templates in Ansible Tower can accept input for the following parameters:
• extra_vars (type: string in JSON format)
• inventory (type: integer)
Additionally, you can define any number of additional parameters to be prompted from the user through the "survey". Answers to the survey are written into the extra_vars in either YAML or JSON format.
Creating an Ansible Tower Automation always requires you to define parameters for any variables set to "prompt on launch". Runbook Automation prevents the creation and execution of Ansible Tower Automations where the input parameters do not match the set of parameters set to "prompt on launch". If a referenced Job Template or Workflow Job Template changes in this regard, you must adjust the referring Ansible Tower Automations as well. If a parameter is set to "prompt on launch" in Ansible Tower, but also defines a value, Runbook Automation imports Tower's default value into the automation parameter.
During execution of Ansible Tower Automations, make sure to pass the variables in their proper data format (see the sketch after this section). For example, Ansible Tower expects the inventory to be an integer referencing an inventory definition. If you enter a string that cannot be converted to such an integer, the execution fails with an error. The same can apply to surveys, as surveys can define any number of variables in different formats. If the survey requires prompted values, a good practice is to include instructions in the runbook itself.
Recommendation: Runbook Automation uses automation definitions as mappings to Ansible Tower Job Templates and Workflow Job Templates. It is recommended to define defaults in Ansible Tower and only prompt for parameters that depend on the runbook execution, rather than using Runbook Automation as the primary tool for parameter definitions.
For more details on parameters, see the Ansible Tower Documentation for Job Templates or Workflow Job Templates.
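To illustrate the expected data formats, a prompted launch might carry a payload like the following sketch. The /api/v2/job_templates/<id>/launch/ path, the template ID, and all values are assumptions for illustration; note that inventory and verbosity are integers, credentials is an array of integers, diff_mode is a boolean, and extra_vars is a string in JSON format.

# Hypothetical launch request showing the parameter data types.
curl -X POST -u "admin:password" \
  -H "Content-Type: application/json" \
  -d '{
        "inventory": 42,
        "verbosity": 1,
        "diff_mode": false,
        "credentials": [7],
        "extra_vars": "{\"filesystem\": \"/var\"}"
      }' \
  "https://ansible-tower-server.mycompany.com/api/v2/job_templates/10/launch/"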

Creating Ansible Tower automations

Before you begin
To create an Ansible Tower Automation, you must set up a connection to the Ansible Tower server.

About this task
Complete the following steps to create an Ansible Tower Automation.

Procedure
1. Open the Automations tab.
2. Click New automation.
3. From the Type drop-down menu, select Ansible Tower.
4. Choose values for "name" (required), "prerequisites" (optional), and "description" (optional).
5. Select either Job Template or Workflow Job Template as the Template Type.
6. Select the asset that you want to reference as the template. This list is retrieved directly from the connected Ansible Tower server and shows the template names for the selected template type. You can enter text to search and narrow down the displayed items.
7. Runbook Automation automatically populates the list of parameters for this template. Optionally, you can edit each parameter to provide a description, a default value, or both.
8. Click Save to create the Ansible Tower Automation.

Changing Job Template Parameters after an Ansible Tower Automation has been created

About this task
If you change a job template, for example by selecting or deselecting "prompt on launch" for a parameter or by changing the survey, the change does not automatically propagate to Runbook Automation. Complete the following steps to ensure that changes are picked up in Runbook Automation.

Procedure
1. Edit the automation that refers to the Ansible job template.
2. Reselect the job template. This ensures that the new parameters are picked up automatically.
3. Optional: Review the parameters and their default values.
4. Save the automation.
5. Edit the runbook that refers to the changed automation.
6. Edit the automation configuration and ensure that the parameter mapping is configured correctly.
7. Save and publish the runbook.

Testing an automation

As a Service Delivery Engineer, you can test an automation before you use it within a runbook. This allows you to develop and test automations quickly.

Procedure
1. Click Automations.
2. Select an automation and click Test to start the test procedure for the automation.
3. Follow the instructions on the Testing automation page.
4. Verify the result of the automation.
5. Optional: Test the automation multiple times. You can modify parameter values between test runs.
6. When you are finished testing the automation, click Close.

Managing automations

As a Service Delivery Engineer, you are interested in how the automations run and how to keep them running smoothly. Use the Automations page to test automations, view statistics, and administer the available automations. The following types of runbook contain automations:

Semi-automated runbooks
Runbooks that contain one or more automations. These runbooks consist of manual and automated steps.
Fully automated runbooks
Runbooks that contain an automation in each step. If fully automated runbooks are started by a trigger, you can find their status on the Execution page. Select the filter Type and select Triggered automatically.
On the Automations page, you can find all available automations with the following information displayed in the table:
Name
Name and description of the automation.
Type
Automations are either of the type script or IBM BigFix, depending on the connection type used.
Invocations
The number of times the automation has been used in runbooks.
Success rate
Indicates how reliably this automation has run so far.
Last modified
A timestamp of when the automation was last modified.
Actions
As a Subject Matter Expert, you can Preview, Test, Edit, Copy, and Delete an automation.

How is the success rate of an automation calculated?

An automation can have the following exit states:
• failed - the automation could not even be started, for example because of an invalid hostname, unsuccessful authentication, or incorrect parameters (in the case of an Ansible Tower automation).
• unsuccessful - the automation could be started but has a return code indicating an error; for ssh automations the exit code is not 0, for http automations the http return code is >= 400.
• successful - the automation could be started and has a return code indicating success; for ssh automations the exit code is 0, for http automations the http return code is < 400.
The success rate indicates the quality of an automation. For example, for ssh automations it indicates the quality of the corresponding scripts. For Ansible Tower automations, the success rate indicates the quality of the corresponding Ansible playbooks. It does not indicate whether the configuration is correct. For example, if an ssh automation cannot be executed because of an invalid hostname or authentication failure, you will see this in the success rate of the runbook, but not in the success rate of the automation. Therefore, the success rate is calculated using the number of unsuccessful and successful automation executions. It does not include the number of failed automation executions.
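Assuming the rate is the share of successful runs among the started (successful plus unsuccessful) runs, which is what the exclusion of failed executions implies, the calculation looks like this sketch with made-up counts:

# Assumed counts: 8 successful, 2 unsuccessful, 5 failed executions.
# Failed executions do not enter the calculation at all.
awk -v s=8 -v u=2 'BEGIN { printf "success rate: %.0f%%\n", 100 * s / (s + u) }'
# Prints: success rate: 80%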

Triggers

If you have events that always correspond to the same runbook, you can create a trigger and link the event with the runbook. Triggers can run with manual and automated runbooks. If the runbook is a manual or semi-automated runbook, the operator must complete the parameter values. If the runbook is fully automated, the trigger runs with pre-defined values. The operator does not even notice that the runbook was executed.
You must install Netcool/Impact to run the trigger service. For more information, see “Installing Netcool/Impact to run the trigger service” on page 249.

Create a trigger

Map existing runbooks to Netcool/Impact events. Triggers can run manually or automatically, depending on the type of runbook that is mapped to the event. You can create a new trigger if you are an RBA Author or RBA Approver. For more information about roles, see Creating users.

Before you begin
You must create an Event Trigger connection before you can work with triggers. For more information, see “Event Trigger (Netcool/Impact)” on page 246.

Procedure
1. Open the Triggers page and click New Trigger. The Add Trigger dialog opens.
2. Enter the Name and Description of the trigger. Name is a required field. Describe what kind of events this trigger is used for: which problem, reported by an event, does the mapped runbook solve?

Warning: The description must not contain newline characters.

3. Click Add Filter to create a filter for the events. You can click Add Filter after you have entered the Name.
4. Create at least one filter, or combine multiple filters with each other. The following filter types are provided:
Event Summary
A simple way to filter events. Define a filter that is based on the Summary field of the OMNIbus event. Select an operator from the drop-down list. Then, enter a text string. For example, define Event Summary: equals DB2. This filter matches all events where the value of the Summary field is equal to DB2.
Event Filter
Select one or multiple predefined filters. Use Search if you want to filter the list of predefined filters. Hover over an event filter to see what the actual filter query looks like. For example, hover over the Critical Alarms filter. The hover displays Severity = 5. For instructions on how to modify this list, see Edit the list of predefined event filters.
Attribute Filter
Create your own filter. Enter a description for the new filter that you want to create. Select an attribute, select an operator, and enter a value. Examples for each attribute are displayed below the Value field. Generally, it is recommended to use the "equals" operator whenever possible. If you need more flexibility, you can use the "like" operator on an attribute that supports strings as values. The following tips give some ideas for writing value patterns with the "like" operator:
• If you only specify a string pattern, then all strings that contain the string pattern will match. For example, if you define the value pattern "Status", then "TriggerStatus", "MyStatusLogger", "Status", and "StatusLogger" match, but "MemStat" does not match.
• Value patterns are case sensitive.
• You can use "[abcd]" or "[a-d]" to match any (single) character in the square brackets or in the defined range of characters.
• You can use the regular expression quantifiers *, +, ?, for example: "[a-d]*" or ".+".
For a full description of the value patterns that can be entered in the value field, see the NETCOOL regular expression library in the Netcool/OMNIbus Knowledge Center.
Click Save rule to add the new filter to the list. Click Save. All filters that are selected or created are displayed.

Choose whether events and filters are mapped with AND or OR. For example, if you selected AND, then the mapping is processed as follows: Event 1 is mapped with filter 1 AND filter 2 AND filter 3. If you selected OR, then the mapping is processed as: Event 1 is mapped with filter 1 OR filter 2 OR filter 3.
Note: As a best practice, define the filters in a way that they do not overlap with the filters that you already defined within other triggers. In most cases, it is best to use the "equals" operator to describe the exact match that you are looking for. If filters from multiple triggers match a single Netcool/OMNIbus event, then it is undefined which one of the triggers takes effect, and which one of the related runbooks is associated with the event.
Deprecation notice: The following filter operators on fields that have strings as a value will be deprecated in future releases:
• contains
• does not contain
• greater than
• greater than or equal
• in
• less than
• less than or equal
• not in
• not like
The following operators on fields that have numbers as a value are deprecated:
• in
• not in
You can check your filter definitions and the resulting consolidated filter expression in the Netcool/Impact UI. Click Services > RBA_EventReader > Edit and switch to the Event Mapping tab. Click Analyze Event Filter Mapping. The Filter Range Overlap Analysis Result table shows which filters are defined, and which filters they overlap with. This table should have a green background color for all fields. For fields that are not green, investigate the reason and, if possible, adjust the triggers and filters so that the analysis result becomes green again. Also note that if you have disabled a trigger in the Runbook Automation UI, the related filters are still listed as active in the Netcool/Impact UI. All disabled triggers are skipped during the detailed event analysis phase.
You can click View Sample Events if you want to know the outcome of your selected filters. You can close the Sample Events window by selecting the cross icon in the upper right corner.
5. Click Select Runbook to map a runbook to your event filter.
6. Select Runbook Type and choose whether you want to list automatic or manual runbooks. The difference is that automatic runbooks do not require any manual interaction from the operator. If the event occurs, the runbook runs automatically. For manual runbooks, the runbook is added to the event so that the operator can launch the runbook from the event viewer. The drop-down list displays up to 50 available runbooks. You can start typing the name of the runbook that you are looking for. As you type, the list displays only runbooks that contain the typed text. Select the runbook and click Save.
7. The parameters of the selected runbook are displayed. Select how you want to enter the parameter value:
From event
The parameter value is contained in the event. If the runbook is started, the parameter value is taken from the event.
Manual
If the runbook is started, the operator manually enters the parameter values.

You can also enter a pattern to extract a subset of the parameter value by entering a regular expression. This is optional. The regular expression syntax is based on the Perl regular expression syntax; it is the same syntax as that used in Netcool/Impact. The following conditions apply:
• If the pattern field is empty, then the complete value of the selected event property is copied into the runbook parameter.
• If the pattern field contains a regular expression, the RExtract tool with the default behavior for the "Flag" parameter is used ("true"). For more information about that tool, see the Netcool/Impact product documentation for RExtract.
• Tips for writing regular expressions:
– Always use groups. For example, specify (\w{4}) instead of \w{4}.
– Specify the regular expression itself without adding any modifiers. The "global" modifier is applied automatically by RExtract. For example, specify (\w{4}) instead of /(\w{4})/g.
– The regular expression (.*) matches no result data, which is the default behavior of RExtract.
– If there are multiple matches, the last match is returned. For example:

Value: omnibus.service.netcool
Pattern: ([a-z]{5})
Result: netco

Note: In theory, there are three matches: {omnib, servi, netco}.
– If there are multiple concatenated groups, then the first group of the last match is returned. For example:

Value: omnibus.service.netcool
Pattern: ([\w]{4})([a-z])
Result: netc

Note: In theory, there are three matches: {omnib, servi, netco}. The last match is "netco", which is divided into the two groups "netc" and "o".
– If you copy a regular expression from a document, ensure that all characters are copied as plain letters. For example, use the circumflex accent ^ (U+005E) instead of the modifier letter circumflex accent ˆ (U+02C6).
Restriction: If you have IBM Tivoli Netcool/Impact V7.1.0.16 or lower installed, the pattern must not contain escaped backslash characters (\\). If you have IBM Tivoli Netcool/Impact V7.1.0.17 or higher installed, the pattern can contain escaped backslash characters (\\).
Note: If you change the corresponding runbook and, for example, add or remove a parameter, the parameter mapping will no longer work. Ensure that you edit the trigger again and update the parameter mapping.
8. Select Enable this trigger to run if you want this trigger to be used. The trigger is then added to the Triggers table with the flag Enabled. If you do not want to use this trigger, clear the check box. The trigger is added with the flag Disabled.
9. Select Run this runbook automatically. This option is only available for automatic runbooks.

Triggers

Add, edit, or delete triggers, or check whether a trigger is disabled or enabled. The following functions are available on the Triggers page:
New Trigger
Click New Trigger to open the Add Trigger dialog and configure your trigger. For more information, see “Create a trigger” on page 401. You can add a trigger if you are an RBA Author or RBA Approver.
Edit Trigger
In the Triggers table, click Edit to change your current settings.

Delete Trigger
If this trigger is no longer valid, click Delete. You can delete a trigger if you are an RBA Manager.
Check if the trigger is enabled or disabled
The Manage Trigger table displays the current status of a trigger (disabled or enabled). To enable a trigger, click Edit and select Enable this trigger to run at the bottom of the dialog.
Check if an automatic or manual runbook is used
You can find out whether the trigger is mapped to an automatic or a manual runbook (Automated Runbook Yes or Automated Runbook No).
Check if the runbook that is assigned to the trigger still exists
The Manage Trigger table displays any triggers that are broken because the assigned runbook has been deleted.
For more information about roles, see “Administering users” on page 353.

Administering on-premises systems

Perform the following tasks to administer the components of your on-premises Netcool Operations Insight deployment.

Administering Event Analytics

Event Analytics allows you to identify seasonal patterns of events and related events within your monitored environment.

Seasonal events

Event Analytics uses statistical analysis of IBM Tivoli Netcool/OMNIbus historical event data to determine the seasonality of events, such as when and how frequently events occur. The results of this analysis are output in both reports and graphs.
The data that is presented in the event seasonality report helps you to identify seasonal event patterns within your infrastructure. For example, an event that periodically occurs at an unscheduled specific time is highlighted. Seasonal Event Rules are grouped by state in the Seasonal Event Rules portlet. You can:
• Use the View Seasonal Events UI to analyze seasonal events and associated related events.
• Deploy validated seasonal event rules, without writing new code. Rules that are generated in this way can have various actions applied to them.

Related events

Event Analytics uses statistical analysis of Tivoli Netcool/OMNIbus historical event data to determine which events have a statistical tendency to occur together. Event Analytics outputs the results of this statistical analysis as event groups, on a scheduled basis. You can:
• Use the related events UI to analyze these event groups.
• Deploy validated event groups as Netcool/Impact correlation rules with a single click, without the need to write any code. Correlation rules that are generated in this way act on real-time event data to show a single synthetic event for the events in the event group.
• Present all events in the group as children of this synthetic event. This view decreases the number of events displayed to your operations staff in the Event Viewer.
• Use the Related Event portlet to analyze patterns in groups and deploy correlation rules based on common event types between the groups.
The system uses the most actionable event in the group as the parent event to be set by the correlation rule. By default, the most actionable event in the group is the most ticketed or acknowledged event. Before you deploy the correlation rule, you can change the parent event setting. A synthetic event is created with some of the properties of the parent event, and all the related events are grouped under this synthetic event.

Event groups are generated by scheduled runs of related event configurations. A default related event configuration is provided. You can create your own configurations and specify which historical data to analyze. For example, you can specify a custom time range, an event filter, and a schedule. For more information about related events, see “Related events” on page 433.
You can create a pattern based on the related event groups discovered by the related event analytic. The system can also suggest name patterns to you based on the related event groups discovered by the related event analytic. You can use an event in the group as the parent event to be set by the correlation rule, or create a synthetic event as the parent. You can also test the performance of a pattern before it is created, to check the number of related events groups and events returned for a pattern.

Administering analytics configurations

You can create and run configuration scans on demand or on a scheduled basis to generate analytics based on your event data.

Configure Analytics window

The Configure Analytics window contains a list of existing event configurations, or reports. Use this window to view, create, modify, delete, run, or stop event configurations.
Note: To access the Configure Analytics window, users must be assigned the ncw_analytics_admin role.
You can use the Configure Analytics window to determine whether an event recurs and when it recurs most frequently. For example, an event occurs frequently at 9 a.m. every Monday. Knowledge of the type of event and the patterns of recurrence can help to determine the actions that are required to reduce the number of events.
The Configure Analytics table displays the following columns of information for each event configuration:
Name
Specifies the unique event configuration name.
Event Identity
Specifies the database fields that identify a unique event in the database. Event seasonality runs on all events selected from the Event Identity drop-down list. If the Event Identity value is Using Global Settings, the Event Identity is set up in the configuration file.
Seasonality Enabled
Specifies whether the event configuration has seasonality event analytics enabled. This column displays one of the following values:
• True: Seasonality analytics is enabled.
• False: Seasonality analytics is not enabled.
Related Event Enabled
Specifies whether the event configuration has related event analytics enabled. This column displays one of the following values:
• True: Related event analytics is enabled.
• False: Related event analytics is not enabled.
Seasonality Status
Specifies the status of the seasonality event configuration. The column can display one of the following status icons: Waiting, Running, Completed, or Error.
Related Event Status
Specifies the status of the related event configuration. The column can display one of the following status icons: Waiting, Running, Completed, or Error.
Start Time
Specifies the inclusive start date of historical data for the event configuration.
End Time
Specifies the inclusive end date of historical data for the event configuration.

Seasonality Phase
Specifies the phase of the seasonality event configuration run. In total, this column displays five phases during the run of the seasonality event configuration. For example, when the seasonality event configuration completion phase occurs, the value Completed displays in the column.
Seasonality Phase Progress
Displays the progress of the seasonality event phase, expressed as a percentage. For example, when the seasonality event configuration completion phase finishes, the value 100% displays in the column.
Related Event Phase
Specifies the phase of the related event configuration run. In total, this column displays five phases during the run of the related event configuration. For example, when the related event configuration completion phase occurs, the value Completed displays in the column.
Related Event Phase Progress
Displays the progress of the related event phase, expressed as a percentage. For example, when the related event configuration completion phase finishes, the value 100% displays in the column.
Scheduled
Indicates whether the event configuration is scheduled to run every x number of days, weeks, or months. The value Yes displays in the column if the event configuration run is scheduled.
Relationship profile
Specifies the strength of the relationship between the events in the event groups that are determined by the algorithm. One of the following values is specified:
Strong
Represents a high confidence level for relationships between events, but fewer events and fewer event groups. Your event groups might be missing weaker related events.
Medium
Represents a medium confidence level for relationships between events, an average number of events, and fewer event groups. Your event groups might be missing weakly related events.
Weak
You see more events and more event groups, but potentially more false positives.
Important: Netcool/Impact does not support Arabic or Hebrew. Event Analytics users who are working in Arabic or Hebrew see some English text.

Setting the Impact data provider and other portlet preferences

If there are multiple connections with the Impact data provider, you must specify which Impact data provider to use.

About this task
If a single connection with the Impact data provider exists, then that connection is used to compile the list of seasonal reports and display them in a table. If there are multiple connections with the Impact data provider, you must edit your portlet preferences to select one of the connections.

Procedure
1. To edit your portlet preferences, or as an administrator to edit the portlet defaults:

• To edit your portlet preferences, click Page Actions > Personalize Page > Widget > Personalize.

• To edit the portlet defaults for all users, click Page Actions > Edit Page > Widget > Edit.
The Configure Analytics dialog box is displayed.
2. Select a data provider from the Data Provider drop-down list.
3. In the Bidi Settings tab, specify the settings for the display of bidirectional text.

Component direction
Select the arrangement of items in the portlet: left-to-right or right-to-left. The default setting uses the value that is defined for the page or the console. If the page and console both use the default setting, the locale of your browser determines the layout.
Text direction
Select the direction of text on the portlet. The default setting uses the value that is defined for the page or the console. If the page and console both use the default setting, the locale of your browser determines the text direction. The Contextual Input setting displays text that you enter in the appropriate direction for your globalization settings.
4. To save your changes, complete the following steps.
a) Select Save in the Configure Analytics dialog box. The Configure Analytics dialog box is closed.
b) Select Save, which is in the upper right of the window.

Viewing current analytics configurations

An Administrator can see the list of current analytics configurations and some basic information about the analytics (related events and seasonal events) related to those configurations.

About this task
Event Analytics includes a default analytics configuration for you to run a basic configuration with default values. You can run, modify, or delete this analytics configuration. To view your analytics configurations, complete the following steps. This task assumes that you are logged in to the Dashboard Application Services Hub as a user with the ncw_analytics_admin role.

Procedure
1. Start the Configure Analytics portlet.
a) In the Dashboard Application Services Hub navigation menu, go to the Insights menu.
b) Select Configure Analytics.
2. In the Configure Analytics portlet, a table presents a list of analytics configurations that are already configured. Scroll down the list to view all analytics configurations. The table automatically refreshes every 60 seconds and displays information for select column headings.
• To view the configuration parameters for a specific analytics configuration, select a configuration and then select the Modify Selected Configuration icon.
• To view the progress of the latest action that is taken for an analytics configuration, look at the content that is displayed in the following columns:
Seasonality Status
Specifies the status of the seasonality event configuration. The column can display one of the following status icons: Waiting, Running, Completed, or Error.
Related Event Status
Specifies the status of the related event configuration. The column can display one of the following status icons: Waiting, Running, Completed, or Error.
Start Time
Specifies the inclusive start date of historical data for the event configuration.
End Time
Specifies the inclusive end date of historical data for the event configuration.
Seasonality Phase
Specifies the phase of the seasonality event configuration run. In total, this column displays five phases during the run of the seasonality event configuration. For example, when the seasonality event configuration completion phase occurs, the value Completed displays in the column.
Seasonality Phase Progress
Displays the progress of the seasonality event phase, expressed as a percentage. For example, when the seasonality event configuration completion phase finishes, the value 100% displays in the column.

Chapter 7. Administering 407 Related Event Phase Specifies the phase of the related event configuration run. In total, this column displays five phases during the run of the related event configuration. For example, when the related event configuration completion phase occurs, the value Completed displays in the column. Related Event Phase Progress Displays the progress of the related event phase, expressed in terms of percentage. For example, when the related event configuration completion phases finishes, the value 100% displays in the column. • To view other details about the analytics configuration, look at the information displayed in other columns: Name Specifies the unique event configuration name. Event identity Specifies the database fields that identify a unique event in the database. Event seasonality runs on all events selected from the Event Identity drop-down list. Scheduled Advises whether the analytics configuration is scheduled to query the historical events database. Seasonality Enabled Specifies whether the event configuration has seasonality event analytics enabled. This column displays one of the following values: – True: Seasonality analytics is enabled. – False: Seasonality analytics is not enabled. Related Event Enabled Specifies whether the event configuration has related event analytics enabled. This column displays one of the following values: – True: Related event analytics is enabled. – False: Related event analytics is not enabled. Relationship Profile The entry in this column specifies the strength of the relationship between the events in the event groups that are determined by the algorithm. One value is specified. Strong Represents a high confidence level for relationships between events, but fewer events and fewer event groups. Your event groups might be missing weaker related events. Medium Represents medium confidence level for relationships between events, average number of events and fewer event groups. Your event groups might be missing weakly related events. Weak You see more events and more event groups but potentially more false positives.

Creating a new or modifying an existing analytics configuration

An Administrator can create a new analytics configuration or modify an existing analytics configuration. You choose the analytics type (related events, seasonal events, or both) when you create the analytics configuration.

Before you begin
When you modify an existing analytics configuration, you cannot change the following parameter fields in the dialog box:
• General > Name
• General > Analytics Type
• Advanced > Override global event identity > Event identity

Procedure
1. Start the Configure Analytics portlet. See “Viewing current analytics configurations” on page 407.

2. Select the Create New Configuration icon to create a new analytics configuration, or highlight an existing analytics configuration and select the Modify Selected Configuration icon to modify it. The UI displays a dialog box that contains parameter fields for the new or existing analytics configuration.
3. Populate the parameter fields in the General tab of the dialog box with the details applicable to the analytics configuration.
Name
Enter the name of the analytics configuration. The name should reflect the type of analytics configuration you are creating. For example, TestSeasonality1 and TestRelatedEvents1 might be names you assign to analytics configurations for seasonality events and related events. The name for an analytics configuration must be unique and must not contain certain invalid characters. The invalid characters are listed in the webgui_home/etc/illegalChar.prop file.
Analytics Type
Select Seasonal event analytics, Related event analytics, or both.
Note: If you select Related event analytics, event pattern processing is automatically performed on your data. Pattern processing runs against all specified event types: the default event type, and any additional event types that were specified when the system was set up. If you want to limit the additional event types used in pattern processing for this configuration, then click Advanced > Override global event type.
Event identity
From the drop-down list, select the database fields that identify a unique event in the database. Event seasonality runs on all events that are selected from the Event Identity drop-down list. For information about how to change the fields in the drop-down list, see “Changing the choice of fields for the Event Identity” on page 413.
Date Range
Select either Fixed date range or Relative date range.
Fixed date range: The Start date and End date parameter fields become active. Enter an inclusive Start date and End date for the analytics configuration.
Relative date range: Enter the time frame that is to be included in the analytics configuration. The relative time frame is measured in Months, Weeks, or Days.
Run every
To schedule the analytics configuration to run at specific time intervals, enter how frequently the configuration is to run. When you enter a value greater than zero, the analytics configuration becomes a scheduled configuration.
Note: This option applies to the relative date range only. You cannot apply this option to the fixed date range.
Filter
Detail any filters that are applicable to the analytics configuration. For example, enter Summary NOT LIKE '%maintenance%'.
4. Populate the parameter fields in the Related Events tab of the dialog box with the details applicable to the analytics configuration.

Relationship Profile
Select the strength of the relationship between the events in an analytics configuration. If this value is set to Strong, there is more confidence in the result and a smaller number of groups is produced.
Automatically deploy rules discovered by this configuration
Select this option if you want to automatically deploy rules that are discovered by this analytics configuration.
5. Populate the parameter fields in the Advanced tab of the dialog box with the details applicable to the analytics configuration. Instead of using the globally defined event identities, you can select Override global event identity to specify which fields identify unique events in the database.
Override global event identity
Select this option to enable the Event identity drop-down list. When the Override global event identity check box is selected, you cannot create a pattern from a configuration. However, you can deploy a related events group.
Event identity
From the Event identity drop-down list, select the database fields that identify a unique event in the database. Event seasonality runs on all events that are selected from the Event Identity drop-down list. For information about how to change the fields in the drop-down list, see “Changing the choice of fields for the Event Identity” on page 413.
Override global event type
Select this option to enable the Event type drop-down list.
Event type
Select one or more event type names from the Event type drop-down list, against which to run event pattern processing.
Note: If no event type names are selected, then all of these event types are used in event pattern processing.
Override global resource
Select this option to enable the Resource drop-down list.
Resource
From the Resource drop-down list, select the database fields that identify a resource in the database.
6. Click Save to save the report without running it, or click Save & Run to save and run the report. You can also cancel the operation by clicking Cancel.

Results
• If no errors are found by the system validation of the analytics configuration content, the new or updated analytics configuration and its parameters are displayed in the table.
• If errors are found by the system validation of the analytics configuration content, you are prevented from saving the configuration and you are asked to reset the invalid parameter.

Scope-based groups

Events in a scope-based group are grouped together because they share a common attribute, such as a resource.
Create an event policy to set ScopeID for events that match your defined filter. Scope-based event grouping is activated for events that match the filter, based on the ScopeID that you specify. For more information about creating a scope-based grouping policy, see the OMNIbus documentation: About scope-based grouping events with Event Analytics.

Manually running an unscheduled analytics configuration

An Administrator can manually run an unscheduled analytics configuration at any stage. You choose the analytics type (related events or seasonal events) that you want to run during the run unscheduled analytics operation.

Before you begin
The analytics configuration that you try to manually run cannot have a Related Event Status or Seasonality Status of Running. If you try to manually run an analytics configuration that is already running, the GUI displays a warning message.
Note: Sequentially running reports can take longer to complete than parallel running reports in previous releases.

Procedure
1. Start the Configure Analytics portlet. See “Viewing current analytics configurations” on page 407.
2. Within the list of analytics configurations that are displayed, select one configuration.
3. From the toolbar, click the Run Selected Configuration icon.
Some columns are updated for your selected analytics configuration. The icon in the Seasonality Status or Related Event Status column changes to an hourglass icon. The text in the Seasonality Phase or Related Event Phase column changes to Waiting to Start. The percentage in the Seasonality Phase Progress or Related Event Phase Progress column starts at 0% and changes to reflect the percentage complete for the phase.

Results
The analytics configuration is put into the queue for the scheduler to run. As the analytics configuration runs, the following columns are updated to communicate the progress of the run:
• Seasonality Status or Related Event Status
• Seasonality Phase or Related Event Phase
• Seasonality Phase Progress or Related Event Phase Progress

What to do next
If you want to stop an analytics configuration that is in Running status, from the toolbar click the Stop Selected Configuration icon.

Stopping an analytics configuration

You can stop an analytics configuration that is running.

About this task
If you create an analytics configuration and select to run the configuration, you might realize that some configuration values are incorrect while the configuration is still running. In this situation, you can choose to stop the analytics configuration instead of deleting the configuration or waiting for the configuration run to complete. To stop a running analytics configuration, complete the following steps.

Procedure
1. Start the Configure Analytics portlet. See “Viewing current analytics configurations” on page 407.
2. Within the list of analytics configurations that are displayed, select the running configuration.
3. From the toolbar, click the Stop Selected Configuration icon.

Deleting an analytics configuration

Analytics configurations can be deleted individually, regardless of their status.

Procedure
1. Start the Configure Analytics portlet, see “Viewing current analytics configurations” on page 407.
2. Select the name of the analytics configuration that you want to delete and, from the toolbar, click the Delete Selected Configuration icon.
3. Within the confirmation dialog that is displayed, select OK.
If you attempt to delete an analytics configuration with one or more rules created for it, a warning dialog box appears with the current rules status for that analytics configuration. The following example illustrates text that the warning dialog box can contain:

Configuration EventAnalytics_Report_1 contains the following rules:

Seasonality Rules: 0 watched rules, 1 active rules, 0 expired rules and 0 archived

Related Event Rules: 0 watched rules, 0 active rules, 0 expired rules and 0 archived

Delete the rules manually before deleting the configuration.

As the message indicates, manually delete the rule or rules associated with the specified analytics configuration before deleting the analytics configuration. In the example, the one active rule associated with the analytics configuration called EventAnalytics_Report_1 would need to be deleted first.

Results
• The table of analytics configurations refreshes, and the deleted configuration no longer appears in the list of analytics configurations.
• Deleting the analytics configuration does not delete the related results if the results are in the Deployed or Expired state. However, deleting the analytics configuration does delete the related results that are in the New or Archived state.
• You are unable to reuse the name of a deleted analytics configuration until all related event groups that contain the name of the deleted configuration are deleted from the system.

Changing the expiry time for related events groups

You can modify the expiry time for Active related events groups. When the expiry time is reached, the expired groups and related events display in the View Related Events portlet, within the Expired tab.

About this task
Groups that contain an expired group or pattern continue to correlate. The system administrator should review the group and the expired group or event. By default, the related events expiry time is 6 months. Complete the following steps to change the related events expiry time.
Note: Watched related events groups do not expire.

Procedure
1. Log in to the Netcool/Impact UI.
2. Select the Related Events project.
3. Select the Policies tab.
4. Within the Policies tab, select and edit the RE_CONSTANTS policy.
5. Within the RE_CONSTANTS policy, change the value for the RE_EXPIRE_TIME constant. Enter your new value in months. For example, setting RE_EXPIRE_TIME to 3 expires groups after three months.

412 IBM Netcool Operations Insight: Integration Guide 6. Save the policy.

Results
This change takes effect only with newly discovered related event groups in the Active tabs.

What to do next
If you want to configure the expiry time so that deployed groups never expire, change the value for the RE_EXPIRE_TIME constant to 0 and save the policy for this change to take effect. You do not need to restart the Impact Server. If you want to enable the expiry time again at any stage, set this constant back to a value greater than 0.

Changing the choice of fields for the Event Identity

You can change which fields from your event history database are available for selection as the Event Identity.

About this task
An Event Identity is a database field that identifies a unique event in the event history database. When you configure a related events configuration, you select database fields for the Event Identity from a drop-down list of available fields. Through configuration of an exception list within Netcool/Impact, you can change the fields available for selection in the drop-down list. Fields included in the exception list do not appear in the Configure Analytics portlet.
The Netcool/Impact design displays the following default fields in the Event Identity drop-down list:
• Alert Group
• Alert Key
• Node
• Summary
• Identifier
• LOCALNODEALIAS
• LOCALPRIOBJ
• LOCALROOTOBJ
• LOCALSECOBJ
• REMOTENODEALIAS
• REMOTEPRIOBJ
• REMOTEROOTOBJ
• REMOTESECOBJ
If you have other database fields that are not in the exception list, these other fields also appear in the drop-down list. Complete the following steps to modify the exception list.

Procedure
1. Log in to Netcool/Impact.
2. From the list of available projects, select the RelatedEvents project.
3. Select the Policies tab. Within this tab, select and edit the RE_CONSTANTS policy.
4. Update the RE_OBJECTSERVER_EXCLUDEDFIELDS variable. Add or remove fields from the static array. Case sensitivity does not matter.
5. Save the policy.
6. Run the policy. If there is an error, check your syntax.

Results
The changes take effect when the policy is saved. No restart of Netcool/Impact is needed.

Changing the discovered groups

You can modify the discovered groups for related events groups.

About this task
By default, all unallocated groups are shown in the group sources for a configuration. Complete the following steps to change the discovered groups.

Procedure
1. Log in to the Netcool/Impact UI.
2. Select the Related Events project.
3. Select the Policies tab.
4. Within the Policies tab, open the NOI_CONSTANTS policy for editing.
5. Within the NOI_CONSTANTS policy, change the value for the RE_DISCOVERED_GROUPS variable.
6. Save the policy.
7. Run the noi_derby_upgradefp15.sql file to upgrade, or re-run the configuration to change the existing data in the views. The default value after upgrade is Unallocated Groups.
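For reference, the variable in the NOI_CONSTANTS policy is a plain assignment in the Netcool/Impact policy language. This sketch shows only the documented post-upgrade default value; the rest of the policy is omitted:

// NOI_CONSTANTS policy (Netcool/Impact IPL) - illustrative fragment only
// Label under which discovered groups appear in the Group Sources table.
RE_DISCOVERED_GROUPS = "Unallocated Groups";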

Validating analytics and deploying rules
Review and validate the analytics to create rules to apply to the live event stream that the operators see in the Netcool/OMNIbus Event Viewer.

View Seasonal Events portlet
The View Seasonal Events portlet contains a list of configurations, a list of seasonal events, and seasonal event details. In addition to viewing the seasonal events, you can mark events as reviewed and identify the events that were reviewed by others.
The View Seasonal Events portlet displays the following default columns in the group table:
Configuration
Displays a list of the seasonal event configurations.
Event Count
Displays a count of the number of seasonal events for each seasonal event configuration.
Node
Displays the managed entity from which the seasonal event originated. The managed entity could be a device or host name, service name, or other entity.
Summary
Displays the description of the seasonal event.
Alert Group
Displays the Alert Group to which the seasonal event belongs.
Reviewed by
Displays the list of user names of the users who reviewed the seasonal event.
Confidence Level
Displays icons and text based on the level of confidence that is associated with the seasonal event. The confidence level is displayed as high, medium, or low, indicating that an event has a high, medium, or low seasonality.
Maximum Severity
Displays the maximum severity of the events that contribute to the seasonality of the selected seasonal event.
Rule Created
Displays the name of the seasonal event rule that was created for the seasonal event.

Related Events Count
Displays a count of the number of related events for each seasonal event.
Obsolete Seasonal Event
Indicates whether this is an obsolete seasonal event. By default, if the same seasonality configuration is run a second time and a seasonal event from the previous run is not found, that seasonal event is deleted. However, if that seasonal event has at least one associated rule, it is not deleted but is marked as an Obsolete seasonal event. Seasonal events from the previous run that are also found in the current run are current events and are not marked as obsolete.

Table 62. Obsolete and current events

Found in previous run | Found in current run | Has associated rule | Seasonal event is...
Yes                   | No                   | No                  | deleted
Yes                   | No                   | Yes                 | marked as Obsolete event
Yes                   | Yes                  | Doesn't matter      | current

First Occurrence
Displays the date and time when the seasonal event first occurred. The time stamp is configurable by users and is displayed in the following format:

YYYY-MM-DD HH:MM:SS

For example:

2012-10-12 09:52:58.0

Viewing a list of seasonal event configurations and events
You can view a list of the seasonal event configurations and seasonal events in the View Seasonal Events portlet.

Before you begin
To access the View Seasonal Events portlet, users must be assigned the ncw_analytics_admin role.

Procedure
To view a list of the seasonal event configurations and seasonal events, complete the following steps.
1. Open the View Seasonal Events portlet.
2. By default, the Seasonal Event configurations are listed in the Configuration table.
3. To view a list of seasonal events associated with the configurations, select one of the following options.
a) Select All to view a list of the seasonal events for all of the configurations.
b) Select a specific configuration to view a list of the seasonal events for that configuration.
The seasonal events are listed in the Summary table.

Results
The seasonal event configurations and associated seasonal events are listed in the View Seasonal Events portlet.

Reviewing a seasonal event
You can mark or unmark a seasonal event as reviewed.

About this task
The Reviewed by column in the View Seasonal Events portlet displays the user name of the reviewer.

Procedure
To update the review status of an event, complete the following steps.
1. Open the View Seasonal Events portlet.
2. Select a specific configuration or ALL in the configuration table.
3. Select a seasonal event from the events table.
4. Right-click the seasonal event and select Mark as Reviewed or Unmark as Reviewed.
Each seasonal event can be reviewed by multiple users. The reviewers are listed in the Reviewed by column.

Results
The selected seasonal event is marked or unmarked as Reviewed. The Reviewed by column is updated to display the user name of the reviewer.

Sorting columns in the View Seasonal Events portlet
You can sort the columns in the View Seasonal Events portlet to organize the displayed data.

Before you begin
To access the View Seasonal Events portlet, users must be assigned the ncw_analytics_admin role.

About this task
The rows in the View Seasonal Events portlet are sorted by the configuration name. You can change the order of the rows by using the columns to sort the data. Sorted columns are denoted by an upwards-pointing arrow or downwards-pointing arrow in the column header, depending on whether the column is sorted in ascending or descending order.

Procedure
To sort the rows by column, complete the following steps.
1. Open the View Seasonal Events portlet.
2. To sort single columns, complete the following steps.
a) To sort a column, click the column header once. The rows are sorted in ascending order.
b) To sort in descending order, click the column header again.
c) To unsort the column, click the column header a third time.
3. To sort multiple columns, complete the following steps.
a) Sort the first column as a single column.
b) Move the mouse pointer over the column header of the next column that you want to sort. Two icons are displayed. One is a standard sorting icon and the other is a nested sorting icon. The nested sorting icon has a number that represents how many columns are sorted as a result of selecting the option. For example, if this is the second column that you want to sort, the number 2 is displayed.
c) Click the nested sorting icon. The column is sorted with regard to the first sorted column.
Tip: When you move the mouse pointer over the nested sorting icon, the hover help indicates that it is a nested sorting option. For example, the hover help for the icon displays "Nested Sort - Click to sort Ascending". The resulting sort order is ascending with regard to the previous columns on which a sorting order was placed.

d) To reverse the order of the nested sort, click the nested sorting icon again. The order is reversed and the nested sorting icon changes to the remove sorting icon.
e) To remove nested sorting from a column, move the mouse pointer over the column header and click the Do not sort icon.
Note: In any sortable column, after nested sorting is selected, when you click the standard sorting icon that column becomes the only sorted column in the table and any existing sorting, including nested sorting, is removed.

Results
Sorted columns are marked with an upwards-pointing arrow or a downwards-pointing arrow in the column header to indicate whether the column is sorted in ascending or descending order. The sorting is temporary and is not retained.

Deleting obsolete seasonal events
You can delete obsolete seasonal events directly from the View Seasonal Events portlet.

About this task
By default, if the same seasonality configuration is run a second time and a seasonal event from the previous run is not found, that seasonal event is deleted. However, if that seasonal event has at least one associated rule, it is not deleted but is marked as an Obsolete seasonal event. Seasonal events from the previous run that are also found in the current run are current events and are not marked as obsolete.

Procedure
1. Open the View Seasonal Events portlet.
2. Sort the table by the Obsolete Seasonal Event column so that any events that have this column set to true come to the top of the table. Events with this column set to true are obsolete seasonal events.
3. Review the rule or rules associated with each obsolete seasonal event. Then proceed as follows:

Table 63. Deciding whether to delete an obsolete seasonal event

What do you want to do? | Then...
You want the associated rule or rules to continue to be active. | Do not delete the obsolete seasonal event.
You no longer require the associated rule or rules to be active. | Delete the obsolete seasonal event: a. Right-click the row containing the relevant obsolete seasonal event. b. Click Delete Obsolete Seasonal Event.

Exporting all seasonal events for a specific configuration to Microsoft Excel
You can export all seasonal events for a specific configuration to a Microsoft Excel spreadsheet from a supported browser.

Before you begin
You view seasonal events for one or more configurations in the View Seasonal Events portlet. To access the View Seasonal Events portlet, users must be assigned the ncw_analytics_admin role.

Procedure
To export all seasonal events for a specific configuration to a Microsoft Excel spreadsheet, complete the following steps.
1. Open the View Seasonal Events portlet.
2. Select a specific configuration from the configuration table.
3. Click the Export Seasonal Events button in the toolbar. After a short time, the Download export results link displays.
4. Click the link to download and save the Microsoft Excel file.

Results
The Microsoft Excel file contains a spreadsheet with the following tabs:
• Report Summary: This tab contains a summary report of the configuration that you selected.
• Seasonal Events: This tab contains the seasonal events for the configuration that you selected.
• Export Comments: This tab contains any comments relating to the export for informational purposes (for example, if the spreadsheet headers are truncated, or if the spreadsheet rows are truncated).

Exporting selected seasonal events for a specific configuration to Microsoft Excel
You can export selected seasonal events for a specific configuration to a Microsoft Excel spreadsheet from a supported browser.

Before you begin
You view seasonal events for one or more configurations in the View Seasonal Events portlet. To access the View Seasonal Events portlet, users must be assigned the ncw_analytics_admin role.

Procedure
To export selected seasonal events for a specific configuration to a Microsoft Excel spreadsheet, complete the following steps.
1. Open the View Seasonal Events portlet.
2. Select a specific configuration from the configuration table.
3. Select multiple seasonal events by using the Ctrl key and select method. (You can also select multiple seasonal events by using the click and drag method.)
4. After selecting multiple seasonal events, right-click one of the selected seasonal events and select the Export Selected Events button in the toolbar. After a short time, the Download export results link displays.
5. Click the link to download and save the Microsoft Excel file.

Results
The Microsoft Excel file contains a spreadsheet with the following tabs:
• Report Summary: This tab contains a summary report of the configuration that you selected.
• Seasonal Events: This tab contains the seasonal events for the configuration that you selected.
• Export Comments: This tab contains any comments relating to the export for informational purposes (for example, if the spreadsheet headers are truncated, or if the spreadsheet rows are truncated).

Seasonal Event Rules
You can use seasonal event rules to apply an action to specific events. You can choose to apply actions to a selected seasonal event, or to a seasonal event and some or all of its related events. You can use seasonal event rules to apply actions that suppress and unsuppress an event, modify or enrich an event, or create an event if the selected event does not occur when expected.

Creating a seasonal event rule
You can create a watched or deployed seasonal event rule from the View Seasonal Events portlet.

Before you begin
To access the View Seasonal Events portlet, users must be assigned the ncw_analytics_admin role.

Procedure
To create a seasonal event rule in the View Seasonal Events portlet, complete the following steps.
1. Open the View Seasonal Events portlet.
2. Select a specific configuration or ALL in the configuration table.
3. Select a seasonal event from the events table.
4. Right-click the seasonal event and select Create Rule.
5. Input a unique rule name in the Create Rule window.
6. Input the following rule criteria for events and actions in the Create Rule window.
a) To apply rule actions to an event and time condition, see “Applying rule actions to an event and time condition” on page 419.
b) To apply actions when an event occurs, see “Applying actions when an event occurs” on page 420.
c) To apply actions when an event does not occur, see “Applying actions when an event does not occur” on page 421.
7. To save the seasonal event rule, choose one of the following criteria.
a) Select Watch to monitor the rule's performance before it is deployed.
b) Select Deploy to activate the rule.

Results
A seasonal event rule is created. To view a list of current seasonal event rules, open the Seasonal Event Rules portlet.

Applying rule actions to an event and time condition
To create a seasonal event rule, you must specify the selected events, time conditions, or both, in the Create Rule or Modify Existing Rule window.

Before you begin
Create or modify an existing seasonal event rule. To create a seasonal event rule, see “Creating a seasonal event rule” on page 419. To modify an existing seasonal event rule, see “Modifying an existing seasonal event rule” on page 424.

About this task
The seasonal event that is selected by default in the Event Selection pane is the seasonal event from which the Create Rule or Modify Existing Rule window was opened.
Note: A seasonal event rule suppresses events when they occur for a deployed related event group. The seasonal rule actions do not apply to the synthetic parent event that is created.
Note: You can create a seasonal event rule to unsuppress an event or alarm. This rule has no actions if there are no suppressed alarms.

Procedure
To specify the selected events and time conditions, complete the following steps in the Create Rule window.
1. To select all, or one or more, of the events that are related to the seasonal event, complete the following steps.
a) To select all of the related events, select the Select all related events check box.

b) To select one or more of the related events, click Edit Selection and select the related events.
2. To save the related event selection, click OK.
3. To select a time condition, complete the following steps.
a) Select one of the following time condition filter conditions.
AND
Select AND to apply rule actions to each of the selected time conditions.
OR
Select OR to apply rule actions to individual time conditions.
b) Select Minute of the Hour, Hour of the Day, Day of Week, or Day of Month from the drop-down menu.
c) Select Is or Is Not from the drop-down menu.
d) Select the appropriate minute, hour, day, or date from the drop-down menu. You can select multiple values from this drop-down menu.
Note: High, medium, and low seasonality labels are applied to this time selection drop-down menu to indicate the seasonality of the events occurring at that time.
4. Click the add button to add another time condition.
5. To save the event selection and time conditions, choose one of the following criteria.
a) Select Watch to monitor the rule's performance before it is deployed.
b) Select Deploy to activate the rule.

Results
The seasonal event rule conditions are applied to the selected events, time conditions, or both.

Applying actions when an event occurs
You can apply specific actions when an event occurs in a specific time window.

Before you begin
Create or modify an existing seasonal event rule. To create a seasonal event rule, see “Creating a seasonal event rule” on page 419. To modify an existing seasonal event rule, see “Modifying an existing seasonal event rule” on page 424.
Note: To suppress or unsuppress events, you must update the noi_default_values file. For more information about the noi_default_values file, see “Configuring event suppression” on page 283.

About this task
The events that are selected in the Event Selection pane are the events to which the action is applied when the event occurs in a specific time window. For more information about selecting events, see “Applying rule actions to an event and time condition” on page 419. You can suppress events that do not require you to take any direct action, and unsuppress the events after a specified time period. You can set a column value on an event occurrence and set it again after a specified time period.

Procedure
To specify the actions to apply when an event occurs, complete the following steps in the Actions When Event(s) Occurs in Specified Time Window(s) pane.
1. To suppress an event so that no action is taken when it occurs, complete the following steps.
a) Select the Suppress event(s) check box.
b) (Optional) To select a column value, see step 3 below.

2. To unsuppress an event after an action occurs, complete the following steps.
a) To select the time after the action occurs to unsuppress the event, select a number from the Perform Action(s) After list, or type an entry in the field. Select Seconds, Minutes, or Hours from the Perform Action(s) After drop-down list.
b) Select the Unsuppress event(s) check box.
c) (Optional) To select a column value, see step 4 below.
3. To set the column value after an action occurs, complete the following steps.
a) Select the Set Column Values check box and click the Set Column Value button for Perform Action(s) on Event Occurrence.
b) In the Set Column Value page, input values for the ObjectServer columns.
c) To save the column values, click Ok.
4. To reset the column value after a specified time period, complete the following steps.
a) To specify a time period, select a number from the Perform Action(s) After list, or type an entry in the field. Select Seconds, Minutes, or Hours from the Perform Action(s) After drop-down list.
b) Select the Set Column Values check box and click the Set Column Value button for Perform Action(s) After.
c) In the Set Column Value page, input values for the ObjectServer columns.
d) To save the column values, click Ok.
5. To save the seasonal event rule, choose one of the following options.
a) Select Watch to monitor the rule's performance before it is deployed.
b) Select Deploy to activate the rule.

Results
The action that is applied when an event occurs in a specific time window is saved.

Applying actions when an event does not occur
You can apply specific actions when an event does not occur in a specific time window.

Before you begin
Create or modify an existing seasonal event rule. To create a seasonal event rule, see “Creating a seasonal event rule” on page 419. To modify an existing seasonal event rule, see “Modifying an existing seasonal event rule” on page 424.

About this task
The events that are selected in the Event Selection pane are the events to which the action is applied if the event does not occur in a specific time window. For more information about selecting events, see “Applying rule actions to an event and time condition” on page 419.

Procedure
To specify the actions to apply when an event does not occur, complete the following steps in the Actions When Event(s) Does Not Occur in Specified Time Window(s) pane.
1. To select the time after which the event does not occur to apply the action, complete the following steps.
a) Select a number from the Perform Action(s) After list, or type an entry in the field.
b) Select Seconds, Minutes, or Hours from the Perform Action(s) After drop-down list.
2. To create a synthetic event on a non-occurrence, select the Create event check box and click Create event.
3. To define the event, complete the fields in the new Create Event window.
4. To save the synthetic event, click Ok.
5. To save the seasonal event rule, choose one of the following options.

a) Select Watch to monitor the rule's performance before it is deployed.
b) Select Deploy to activate the rule.

Results
The action that is applied when an event does not occur in a specific time window is saved.

Seasonal event rule states
Seasonal event rules are grouped by state in the Seasonal Event Rules portlet.

Seasonal event rules are grouped in the following states.
Watched
A watched seasonal event rule is not active. You can watch a seasonal event rule to monitor how the rule performs before you decide whether to deploy it. Watched seasonal event rules take no actions on events. A watched rule is used to collect statistics for rule matches on incoming events.
Active
A deployed seasonal event rule is active. Active seasonal event rules take defined actions on live events.
Expired
An expired seasonal event rule remains active. If triggered, the seasonal event rule takes defined actions on live events. The default expiry time is 6 months. To ensure that seasonal event rules are valid, regularly review the state and performance of the rules. You can customize the expiry time of a seasonal event rule. For more information, see “Modifying the default seasonal event rule expiry time” on page 422.
Archived
An archived seasonal event rule is not active. You can choose to archive a watched, active, or expired seasonal event rule. To delete a seasonal event rule, it must first be archived.
For more information about changing the state of a seasonal event rule, see “Modifying a seasonal event rule state” on page 425.

Modifying the default seasonal event rule expiry time
You can change the default seasonal event rule expiry time to a specific time, or choose no expiry time to ensure that a seasonal event rule does not expire.

About this task
To ensure that seasonal event rules are valid, you should regularly review and update the state of the rules.

Procedure
To modify or remove the default seasonal event rules expiry time, complete the following steps.
1. To generate a properties file from the command-line interface, use the following command:

./nci_trigger SERVER username/password NOI_DefaultValues_Export FILENAME directory/filename

Where:
SERVER
The server where Event Analytics is installed.
username
The user name of the Event Analytics user.
password
The password of the Event Analytics user.
directory
The directory where the file is stored.
filename
The name of the properties file.
For example:

./nci_trigger NCI impactadmin/impact NOI_DefaultValues_Export FILENAME /space/noi_default_values

2. To modify the default seasonal event rules expiry time, edit the default values of the following parameters (see the example properties entries after the import command below).
seasonality.rules.expiration.time.value=6
The number of days, hours, or months after which the seasonal event rule expires. The default value is 6.
seasonality.rules.expiration.time.unit=MONTH
The seasonal event rules expiry time frequency. The default frequency is MONTH. The following time units are supported:
• HOUR
• DAY
• MONTH
3. To import the modified properties file into IBM Tivoli Netcool/Impact, use the following command:

./nci_trigger SERVER username/password NOI_DefaultValues_Configure FILENAME directory/filename

For example:

./nci_trigger NCI impactadmin/impact NOI_DefaultValues_Configure FILENAME /space/noi_default_values
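For reference, after the edit in step 2 the relevant entries in the properties file would look like the following sketch. The value 12 is an example, not the default; the shipped defaults are 6 and MONTH:

# Seasonal event rule expiry settings (example values)
# With these settings, rules expire 12 months after deployment.
seasonality.rules.expiration.time.value=12
seasonality.rules.expiration.time.unit=MONTH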

Results
The default seasonal event rules expiry time is modified.

Viewing performance statistics for seasonal event rules
You can view performance statistics for seasonal event rules in the Seasonal Event Rules portlet, within the Watched, Active, or Expired tabs of the group table.

Columns in the group table
The group table in the Seasonal Event Rules portlet displays the seasonal event rules that you have created. The left side of the group table has these columns:
Configuration: Displays the list of configuration names for which a seasonal event rule has been created.
Rule Count: Displays the number of seasonal event rules created for each particular configuration. Under the All item, this number indicates the total number of seasonal event rules created for all configurations.
Rule Name: Displays the name of the seasonal event rule.
Last Run: Displays the date and time when the seasonal event rule was last executed. If the column is blank, the seasonal event rule has not been executed.
Deployed: Displays the date and time when the seasonal event rule was deployed. The term deployed means that the seasonal event rule is available for use, is actively accumulating rule statistics, and any actions applied to the rule are being performed.

Note: For the Last Run and Deployed columns, the date is expressed as month, day, year. The time is expressed as hours:minutes:seconds, followed by AM or PM. For example: Apr 13, 2015 4:45:17 PM.

Performance statistics in the group table

The performance statistics are displayed in the following columns of the group table in the Watched, Active, or Expired tabs in the Seasonal Event Rules portlet.
Suppressed Events: Displays the total number of events that the seasonal event rule suppressed since the rule was deployed.
Unsuppressed Events: Displays the total number of events that the seasonal event rule unsuppressed since the rule was deployed.
Enriched/Modified Events: Displays the total number of events that the seasonal event rule enriched or modified since the rule was deployed.
Generated Events on Non-occurrence: Displays the total number of events that the seasonal event rule generated since the rule was deployed for events that do not meet the event selection criteria (that is, for those matching events that fall outside of the event selection conditions for the rule).

Reset performance statistics
You can reset performance statistics to zero for a group in the Watched, Active, or Expired tabs. To reset performance statistics, right-click the seasonal event rule name (in the Rule Name column) and select Reset performance statistics from the menu. A message displays indicating that the operation will reset statistics data for the selected seasonal event rule and that you will not be able to retrieve this data. Click OK to continue with the operation or Cancel to stop it. A success message displays after you select OK. Resetting performance statistics to zero for a seasonal event rule also clears the Last Run and Deployed columns.
Note that performance statistics are not collected for the Archived tab. When a rule is moved between states, the performance statistics are reset. Every time an action is triggered by the rule, the performance statistics increase.

Modifying an existing seasonal event rule
You can modify an existing seasonal event rule to update or change the event selection criteria or actions.

Before you begin
To access the Seasonal Event Rules portlet, users must be assigned the ncw_analytics_admin role.

Procedure
1. Open the Seasonal Event Rules portlet. The Seasonal Event Rules portlet lists the seasonal event rule configurations in the table on the left side, and the seasonal event rules in the table on the right side.
2. Select the rule that you want to modify from the rule table.
3. Right-click and select Edit Rule.
4. Modify the event selection criteria or actions in the Modify Existing Rule window.
5. To save the seasonal event rule, choose one of the following criteria.
a) Select Watch to monitor the rule's performance before it is deployed.
b) Select Deploy to activate the rule.

Results
The seasonal event rule is modified. To view a list of current seasonal event rules, open the Seasonal Event Rules portlet.

Viewing seasonal event rules grouped by state
You can view seasonal event rules grouped by state in the Seasonal Event Rules portlet.

Before you begin
To access the Seasonal Event Rules portlet, users must be assigned the ncw_analytics_admin role.

Procedure
To view seasonal event rules grouped by state, complete the following steps.
1. Open the Seasonal Event Rules portlet. The Seasonal Event Rules portlet lists the seasonal event rule configurations in the table on the left side, and the seasonal event rules in the table on the right side.
2. Select the seasonal event rule state that you want to view from the status tabs. The seasonal event rules are stored in tabs that relate to their status. For example, to view a list of the active seasonal event rule configurations and rules, select the Active tab.

Results
The seasonal event rule configurations and rules for the chosen status are listed in the Seasonal Event Rules portlet.

Modifying a seasonal event rule state
You can change the state of a seasonal event rule to watched, active, or archived from the Seasonal Event Rules portlet.

Before you begin
To access the Seasonal Event Rules portlet, users must be assigned the ncw_analytics_admin role.

About this task
The seasonal event rules are stored in tabs that relate to their state. The total number of rules is displayed on the tabs. For example, when you Archive a Watched rule, the rule moves from the Watched tab to the Archived tab in the Seasonal Event Rules portlet and the rules total is updated.
Performance statistics about the rule are logged. You can use performance statistics to verify that a deployed rule is being triggered and that a monitored rule is collecting statistics for rule matches for incoming events. Performance statistics are reset when you change the state of a seasonal event rule.

Procedure
To change the state of a seasonal event rule in the Seasonal Event Rules portlet, complete the following steps.
1. Open the Seasonal Event Rules portlet. The Seasonal Event Rules portlet lists the seasonal event rule configurations in the table on the left side, and the seasonal event rules in the table on the right side.
2. To change the state of a seasonal event rule, complete one of the following actions:
a) To change the state of a watched seasonal event rule, select the Watched tab. Select a rule from the rule table. To change the state of the rule, right-click the rule and select Deploy or Archive.
b) To change the state of an active seasonal event rule, select the Active tab. Select a rule from the rule table. To change the state of the rule, right-click the rule and select Watch or Archive.
c) To change the state of an expired seasonal event rule, select the Expired tab. Select a rule from the rule table. To change the state of the rule, right-click the rule and select Validate, Watch, or Archive.
d) To change the state of an archived seasonal event rule, select the Archived tab. Select a rule from the rule table. To change the state of the rule, right-click the rule and select Watch or Deploy.

Results
The seasonal event rule state is changed from its current state to its new state. The rule totals are updated to reflect the new seasonal event rule state.

Applying rule actions to a list of events
You can apply defined actions to a list of events while you create a seasonal event rule.

Before you begin
To access the View Seasonal Events portlet, users must be assigned the ncw_analytics_admin role.

About this task
One of the events in the list on the Related Event Selection window is the seasonal event from which you launched the Create Rule dialog box. When the rule you created is fired, the rule is fired on the seasonal event and the related events that you selected. Because the rule is fired on the seasonal event, it is not possible for you to deselect this seasonal event from the list of related events displayed in the Related Event Selection window.

Procedure
To select a list of events to which the defined action applies, complete the following steps.
1. Open the View Seasonal Events portlet.
2. Select a specific configuration or ALL in the configuration table.
3. Select a seasonal event from the events table.
4. Right-click the seasonal event and select Create Rule.
5. To choose all related events:
a) In the Event Selection pane of the Create Rule page, click the Select all related events checkbox.
6. Or, to choose one or more related events:
a) In the Event Selection pane of the Create Rule page, click the Edit Selection... control button. The Related Event Selection window displays. Note that the seasonal event from which you launched the Create Rule dialog box has a check mark that you cannot deselect.
b) Select one or more related events from the list displayed in the Related Event Selection window.
c) Click OK.
7. To save your changes, choose one of the following options:
a) Select Watch to monitor the rule's performance before it is deployed.
b) Select Deploy to activate the rule.

Results
The updated seasonal event rule is saved and the defined actions are applied to the selected related events.

Setting the column value for an event
You can set the column value for an event when you set the actions for a rule.

Before you begin
To access the View Seasonal Events portlet, users must be assigned the ncw_analytics_admin role.

Procedure
To set the column value, complete the following steps.
1. Open the View Seasonal Events portlet.
2. Select a specific configuration or ALL in the configuration table.
3. Select a seasonal event from the events table.

4. Right-click the seasonal event and select Create Rule.
5. In the Actions When Event(s) Occurs in Specified Time Window(s) pane, select from the following options.
a) To set the column value to suppress an event, select the Set Column Values check box and click the Set Column Value button for Perform Action(s) on Event Occurrence.
b) To set the column value to unsuppress an event, select the Set Column Values check box and click the Set Column Value button for Perform Action(s) After.
6. In the Set Column Value page, input values for the ObjectServer columns. You can add or remove columns by using the plus and minus buttons.
7. To save the column values, click Ok.
8. To save the seasonal event rule, choose one of the following options.
a) Select Watch to monitor the rule's performance before it is deployed.
b) Select Deploy to activate the rule.

Results
The seasonal event rule that modifies the column values is saved.

Seasonal Event Graphs
The seasonal event graphs display bar charts and confidence level event thresholds for seasonal events.
The Seasonal Event Graphs portlet consists of four charts:
Minute of the hour
The minute or minutes of the hour that the event occurs.
Hour of the day
The hour or hours of the day that the event occurs.
Day of the week
The day or days of the week that the event occurs.
Day of the month
The date or dates of the month that the event occurs.
The confidence level of the data in the charts is displayed in three ways:
1. The overall distribution score of each chart is displayed as high (red), medium (orange), or low (green) seasonality at the top of each chart.
2. The degree of deviation of the events is indicated by the high (red) and medium (orange) seasonality threshold lines on the charts.
3. The maximum confidence level of each bar is displayed as high (red), medium (orange), or low (green).
The default confidence level thresholds are as follows:
• High: 99-100%
• Medium: 95-99%
• Low: 0-95%
To modify the default confidence level thresholds of the charts, see “Editing confidence thresholds of Seasonal Event Graphs” on page 430.

Understanding graphs
The four seasonal event graphs illustrate event seasonality. The graphs depict independent observations. For example, if the Hour of the day graph indicates a high confidence level for 5 p.m., and the Minute of the hour graph indicates a high confidence level for minute 35, it does not necessarily mean that the events all occur at 5:35 p.m. The 5 p.m. value can contain other minute values.

Note: In some instances, Minute of the hour is indicated as having a high confidence level but the overall confidence level of seasonality is low. This occurs because the high-level statistic does not include the minute of the hour, due to the poll cycle of monitors.
Note: In some instances, the overall confidence level of a chart is indicated as high although none of the bars in the graph are in the red zone. An example of this is a system failure due to high load at peak times, with no failure outside of these times.
In the seasonal event graphs, Count refers to the number of observations that are recorded in each graph. There is a maximum of one observation for each minute, hour, day, and date range. Therefore, the count for each of the graphs can differ. For example, if an event occurs at the following times:
10:31 a.m., 1 June 2013
10:31 a.m., 2 June 2013
10:35 a.m., 2 June 2013
there is a count of two observations for 10 a.m., two observations for minute 31, and one observation for minute 35.

Viewing seasonal event graphs for a seasonal event
You can view seasonal event graphs for the seasonal events that are displayed in the View Seasonal Events portlet.

Before you begin
To access the View Seasonal Events portlet, users must be assigned the ncw_analytics_admin role.

Procedure
To view seasonal event graphs for a seasonal event, complete the following steps.
1. Open the View Seasonal Events portlet.
2. Select a specific configuration or ALL in the configuration table.
3. Select a seasonal event from the events table.
4. Right-click the seasonal event and select Show Seasonal Event Graphs.

Results
The Seasonal Event Graphs portlet displays the bar charts and confidence levels for the selected seasonal event. For more information about charts and threshold levels, see the “Seasonal Event Graphs” on page 427 topic.

Viewing historical events from seasonality graphs
You can view a list of historical events from seasonality graphs.

Before you begin
To access the View Seasonal Events portlet, users must be assigned the ncw_analytics_admin role.

Procedure
To view a list of historical events from seasonality graphs, complete the following steps.
1. Open the View Seasonal Events portlet.
2. Select a specific configuration or ALL in the configuration table.
3. Select a seasonal event from the events table.
4. Right-click the seasonal event and select Show Seasonal Event Graphs.
5. In the Seasonal Event Graphs tab, you can choose to view all of the historical events for a seasonal event, or filter the historical events by selecting bars in a graph.
a) To view all of the historical events for a seasonal event, select Show Historical Events in the Actions drop-down list.

b) To view the historical events for specific times, hold down the Ctrl key and click the specific bars in the graphs. Select Show Historical Events for Selected Bars in the Actions drop-down list.
Multiple bars that are selected from one chart are filtered by the OR condition. For example, if you select the bars for 9am or 5pm in the Hour of the Day graph, all of the events that occurred between 9am and 10am and all events that occurred between 5pm and 6pm are displayed in the Historical Event portlet.
Multiple bars that are selected from more than one graph are filtered by the AND condition. For example, if you select the bar for 9am in the Hour of the Day graph and Monday in the Day of the Week graph, all of the events that occurred between 9am and 10am on Mondays are displayed in the Historical Event portlet.

Results
The historical events are listed in the Historical Event portlet.

Exporting seasonal event graphs for a specified seasonal event to Microsoft Excel
You can export seasonal event graphs for a specified seasonal event to a Microsoft Excel spreadsheet from a supported browser.

Before you begin
You view seasonal event graphs for the seasonal events that are displayed in the View Seasonal Events portlet. To access the View Seasonal Events portlet, users must be assigned the ncw_analytics_admin role.

About this task
In addition to exporting seasonal event graphs to a Microsoft Excel spreadsheet, you also export the historical event data and the seasonal event data and confidence levels for the seasonal event that you selected. Currently, there is no way to export only the seasonal event graphs.

Procedure
To export seasonal event graphs for a specified seasonal event to a Microsoft Excel spreadsheet, complete the following steps.
1. Open the View Seasonal Events portlet.
2. Select a specific configuration from the configuration table.
3. Select a seasonal event from the events table.
4. Right-click the seasonal event and select Show Seasonal Event Graphs.
5. From the Actions menu, select Export Seasonal Event Graphs. After a short time, the Download export results link displays.
6. Click the link to download and save the Microsoft Excel file.

Results
The Microsoft Excel file contains a spreadsheet with the following tabs:
• Seasonal Data: This tab contains the seasonal event data and confidence levels for the seasonal event that you selected.
• Seasonality Charts: This tab contains the seasonal event graphs for the seasonal event that you selected.
• Historical Events: This tab contains the historical event data for the seasonal event that you selected.
• Export Comments: This tab contains any comments relating to the export for informational purposes (for example, if the spreadsheet headers are truncated, or if the spreadsheet rows are truncated).

For more information about charts and threshold levels, see the “Seasonal Event Graphs” on page 427 topic.

Editing confidence thresholds of Seasonal Event Graphs
You can edit the default confidence level thresholds of the Seasonal Event Graphs.

About this task
The confidence level of the data in the charts is displayed in three ways:
1. The overall distribution score of each chart is displayed as high (red), medium (orange), or low (green) seasonality at the top of each chart.
2. The degree of deviation of the events is indicated by the high (red) and medium (orange) seasonality threshold lines on the charts.
3. The maximum confidence level of each bar is displayed as high (red), medium (orange), or low (green).
The default confidence level thresholds are as follows:
• High: 99-100%
• Medium: 95-99%
• Low: 0-95%

Procedure
To edit the default confidence level thresholds, complete the following steps:
1. To generate a properties file from the command-line interface, use the following command:

./nci_trigger SERVER username/password NOI_DefaultValues_Export FILENAME directory/filename

where:
SERVER
The server where Event Analytics is installed.
username
The user name of the Event Analytics user.
password
The password of the Event Analytics user.
directory
The directory where the file is stored.
filename
The name of the properties file.
For example:

./nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Export FILENAME /tmp/seasonality.props

2. To modify the confidence level thresholds, edit the default values of the following parameters (see the example properties entries after the import command below):
• level_threshold_high = 99
• level_threshold_medium = 95
• level_threshold_low = 0
Note: Other property values are overwritten by the generated properties file. You might need to update other property values. For a full list of properties, see “Generated properties file” on page 270.
3. To import the modified properties file into Netcool/Impact, use the following command:

./nci_trigger SERVER username/password NOI_DefaultValues_Configure FILENAME directory/filename

For example:

./nci_trigger NCI impactadmin/impactpass NOI_DefaultValues_Configure FILENAME /tmp/seasonality.props
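For reference, after the edit in step 2 the threshold entries in the properties file would look like the following sketch. The values 98 and 90 are examples; the documented defaults are 99, 95, and 0:

# Confidence level thresholds for the Seasonal Event Graphs (example values)
# Bars at or above level_threshold_high display as high (red) seasonality.
level_threshold_high = 98
level_threshold_medium = 90
level_threshold_low = 0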

Historical events
You can view a list of historical events for one or more seasonal events in the table that displays in the Historical Event portlet. You can also export the data associated with the list of historical events for the associated seasonal events to a spreadsheet.
The Historical Event portlet displays a table with the following default columns:
Summary
Displays the description of the historical event.
Node
Displays the managed entity from which the historical event originated. The managed entity could be a device or host name, service name, or other entity.
Severity
Displays the severity of the historical event. The following list identifies the possible values that can display in the Severity column:
• 0: Clear
• 1: Indeterminate
• 2: Warning
• 3: Minor
• 4: Major
• 5: Critical
FirstOccurrence
Displays the date and time at which the historical event was created or first occurred. The date is expressed as month, day, year. The time is expressed as hours:minutes:seconds, followed by AM or PM. For example: Apr 13, 2015 4:45:17 PM.
LastOccurrence
Displays the date and time at which the historical event was last updated. The date is expressed as month, day, year. The time is expressed as hours:minutes:seconds, followed by AM or PM. For example: Jun 2, 2015 5:54:49 PM.
Acknowledged
Indicates whether the historical event has been acknowledged:
• 0: No
• 1: Yes
The historical event can be acknowledged manually or automatically by setting up a correlation rule.
Tally
Displays an automatically maintained count of the number of historical events associated with a seasonal event.

Viewing historical events for a seasonal event
You can view a list of historical events for a seasonal event in the table that displays in the Historical Event portlet.

Before you begin
You view seasonal events for which you want a list of historical events in the View Seasonal Events portlet. To access the View Seasonal Events portlet, users must be assigned the ncw_analytics_admin role.

Procedure
To view a list of historical events for a seasonal event, complete the following steps.
1. Open the View Seasonal Events portlet.
2. Select a specific configuration or ALL from the configuration table.
3. Select a seasonal event from the events table.
4. Right-click the seasonal event and select Show Historical Events.

Results
The historical events are listed in the table that displays in the Historical Event portlet.

Exporting historical event data
You can export historical event data to a spreadsheet from Firefox or Internet Explorer.

Before you begin
You first view seasonal events for which you want a list of historical events in the View Seasonal Events portlet. To access the View Seasonal Events portlet, users must be assigned the ncw_analytics_admin role.

Procedure
To export historical event data, complete the following steps.
1. Open the View Seasonal Events portlet.
2. Select a specific configuration or ALL from the configuration table.
3. Select a seasonal event from the events table.
4. Right-click the seasonal event and select Show Historical Events. The historical events are listed in the table that displays in the Historical Event portlet.
5. Select one or more historical events from the table that displays in the Historical Event portlet.
6. To copy the selected historical events:
a) In Firefox, to copy the data from the displayed clipboard, press Ctrl+C followed by Enter.
b) In Internet Explorer, to copy the data from the displayed clipboard, right-click the selected historical event and select Copy Ctrl+C from the drop-down menu.
7. Paste the historical event data into your spreadsheet.

Exporting historical event data for a specified seasonal event to Microsoft Excel
You can export historical event data for a specified seasonal event to a Microsoft Excel spreadsheet from a supported browser.

Before you begin
You first view seasonal events for which you want a list of historical events in the View Seasonal Events portlet. To access the View Seasonal Events portlet, users must be assigned the ncw_analytics_admin role.

About this task
In addition to exporting historical event data to a Microsoft Excel spreadsheet, you also export the seasonal event charts and the seasonal event data and confidence levels for the seasonal event that you selected. Currently, there is no way to export only the historical event data.

Procedure
To export historical event data to a Microsoft Excel spreadsheet, complete the following steps.
1. Open the View Seasonal Events portlet.
2. Select a specific configuration from the configuration table.
3. Select a seasonal event from the events table.
4. Right-click the seasonal event and select Show Seasonal Event Graphs.
5. From the Actions menu, select Export Seasonal Event Graphs. After a short time, the Download export results link displays.
6. Click the link to download and save the Microsoft Excel file.

Results
The Microsoft Excel file contains a spreadsheet with the following tabs:
• Seasonal Data: This tab contains the seasonal event data and confidence levels for the seasonal event that you selected.
• Seasonality Charts: This tab contains the seasonal event graphs for the seasonal event that you selected.
• Historical Events: This tab contains the historical event data for the seasonal event that you selected.
• Export Comments: This tab contains any comments relating to the export for informational purposes (for example, if the spreadsheet headers are truncated, or if the spreadsheet rows are truncated).

Related events
Use the related events function to identify and show events that are historically related, and to deploy chosen correlation rules, which are derived from related events configurations.
You can create a pattern based on a related events group. The pattern applies the events in the group, which are specific to a resource, to any resource.
The related events function is accessible through three portlets:
• The Configure Analytics portlet. Use this portlet to create, modify, run, and delete related events configurations.
• The View Related Events portlet. Use this portlet to review the events and event groups that are derived from a related events configuration and to deploy correlation rules.
• The Related Event Details portlet. Use this portlet to access more detail about an event or an event group.
To access the View Related Events portlet, users must be assigned the ncw_analytics_admin role.
The related events function uses an algorithm with the event database columns that you select to determine relationships between events. Related events analytics finds signatures and patterns that occur together in the historic event stream. This discovery allows subject matter experts to easily review the detected signatures and derive correlation rules from related events configurations, without having to write correlation triggers or policies.
The following diagram shows the relationship between the components of the related events function.

Figure 15. Related events architecture overview

Work with related events
Use the View Related Events portlet to work with related events and related event groups that are derived from your related events configuration. To access the View Related Events portlet, users must be assigned the ncw_analytics_admin role.
In the Configuration, Group, or Event tables, you can right-click a group, a configuration, or the All container to display a menu. The menu lists some of the following actions for you to select.
Watch
For more information about this action, see “Watching a correlation rule” on page 448.
Deploy
For more information about this action, see “Deploying a correlation rule” on page 449.
Archive
For more information about this action, see “Archiving related events” on page 445.
Delete
This action is only available from within the Archived tab. If you want to delete event groups from the system, choose this action.

Reset performance statistics
For more information about this action, see “Viewing performance statistics for a correlation rule” on page 450.
New
This action is only available from within the Archived tab. If you choose this action, your selected row is reinstated into the New tab.
Copy
Choose this action if you want to copy a row, which you can then paste into another document.
Within the View Related Events portlet, in the New, Watched, Active, Expired, or Archived tabs, four tables display information about your related events.
Configuration table
Displays a list of the related event configurations.
Group Sources table
Displays the source information for related event groups based on the configuration and created patterns.
Groups table
Displays the related event groups for a selected configuration.
Events table
Displays the related events for a selected configuration or a selected group.
A performance improvement implemented in V1.6.1 ensures that the View Related Events portlet displays Events, Groups, and Group Sources more quickly once an item is selected. As part of this update, each tab in the View Related Events portlet now lists all configurations in the panel on the left of the portlet following the successful run of a configuration. Configurations are displayed in the panel even if there are no events or groups in a particular state for a given configuration. If no data exists for a particular state, the panels display a No items to display message. The configuration is listed in all five tabs: New, Watched, Active, Expired, and Archived.
Right-click a configuration in the Configuration table to display a list of menu items. You can select the following actions from the menu list.
Watch
For more information about this action, see “Watching a correlation rule” on page 448.
Deploy
For more information about this action, see “Deploying a correlation rule” on page 449.
Archive
For more information about this action, see “Archiving related events” on page 445.
Copy
Choose this action if you want to copy a row, which you can then paste into another document.
Right-click a pattern in the Group Sources table to display a list of menu items. You can select the following actions from the menu list.
Edit Pattern
For more information about this action, see “Editing an existing pattern” on page 465.
Delete Pattern
For more information about this action, see “Deleting an existing pattern” on page 465.
Copy
Choose this action if you want to copy a row, which you can then paste into another document.
Right-click a group name in the Groups table to display a list of menu items. You can select the following actions from the menu list.
Show details
For more information about this action, see “Viewing related events details for a seasonal event” on page 436.
Create Pattern
For more information about this action, see “Creating patterns” on page 452.
Unmark as reviewed
For more information about this action, see “Marking a related events group as reviewed” on page 437.
Mark as reviewed
For more information about this action, see “Marking a related events group as reviewed” on page 437.
Watch
For more information about this action, see “Watching a correlation rule” on page 448.
Deploy
For more information about this action, see “Deploying a correlation rule” on page 449.
Archive
For more information about this action, see “Archiving related events” on page 445.
Delete
This action is only available from within the Archived tab. If you want to delete event groups from the system, choose this action.

Reset performance statistics
For more information about this action, see “Viewing performance statistics for a correlation rule” on page 450.
New
This action is only available from within the Archived tab. If you choose this action, your selected row is reinstated into the New tab.
Copy
Choose this action if you want to copy a row, which you can then paste into another document.
Right-click an event in the Events table to display a list of menu items. You can select the following actions from the menu list.
Show details
For more information about this action, see “Viewing related events details for a seasonal event” on page 436.
Copy
Choose this action if you want to copy a row, which you can then paste into another document.
Within the View Related Events portlet, you can also complete the following types of tasks:
• View related events.
• View related events by group.
• Sort a related events view.
• View performance statistics for a deployed correlation rule.
Within the Related Event Details portlet, you can also complete the following types of tasks:
• Change the pivot event.
• Work with correlation rules and related events.
• View events that form a correlation rule.
• Select a root cause event for a correlation rule.

Viewing related events
In the View Related Events portlet, you can view a listing of related events as determined by the related events configurations that ran.

Procedure
1. Log in to the Dashboard Application Services Hub as a user with the ncw_analytics_admin role.
2. In the Dashboard Application Services Hub navigation menu, go to the Insights menu.
3. Under View Analytics, select View Related Events.
4. By default, the New tab opens within the View Related Events portlet; this tab lists related events with a status of New.

What to do next
If you want to see related events with another status, select the relevant button in the View Related Events portlet toolbar.

Viewing related events details for a seasonal event
You can view related event details for a seasonal event in the Related Event Details portlet.

Before you begin
To access the Seasonal Event Rules portlet, users must be assigned the ncw_analytics_admin role.

Procedure
To view a list of historical events for a seasonal event, complete the following steps.
1. Open the View Seasonal Events portlet.
2. Select a specific configuration or ALL in the configuration table.
3. Select a seasonal event in the events table.
4. Right-click the seasonal event and select Show Related Event Details.

Results
The Related Event Details portlet displays the related event details.

Viewing related events by group
From the full list of related events, you can view only the related events that are associated with a specific group.

About this task
A related events configuration can contain one or many related events groups. A related events group is determined by a related events configuration, and a related events group can be a child of one or more related events configurations.
Note:
• Discovered Groups and any “Suggested patterns” on page 465 are displayed in the Group Sources table of the View Related Events portlet. Any groups that are covered by a suggested pattern do not appear under the list of groups associated with Discovered Groups. A group that is a member of a suggested pattern only shows up under the events of Discovered Groups once the suggested pattern is deleted.
• You might see a different number of Unique Events in a related events group when a Relationship Profile of Strong has been selected for the events in the configuration. This is caused by the same events being repeated more than once.

Procedure
1. Start the View Related Events portlet, see “Viewing related events” on page 436.
2. Within any tab, in the Configuration table, expand the root node All. The list of related events configurations is displayed.
3. In the Configuration table, select a related events configuration. The list of related events groups is displayed in the Group Sources and Groups tables, and the related events are displayed in the Events table.
4. In the Groups table, select a group. The Events table updates and displays only the events that are associated with the selected group.

Viewing related events in the Event Viewer
To see the grouping of related events in the Event Viewer, you must apply a view that uses the IBM Related Events relationship.

Procedure
1. Open the Event Viewer.
2. Click Edit Views.
3. Select the Relationships tab.
4. Select IBM Related events from the drop-down menu.
5. Click Save.

Results
This relationship is used to present the results of correlations generated by the Related Events Analytics functionality.

Marking a related events group as reviewed
The review status for a related events group can be updated in the View Related Events portlet.

About this task
In the View Related Events portlet, in the Groups table you can modify the review status for a related events group. The review status values that are displayed indicate to Administrators whether related events groups are reviewed or not.

Related events groups can display these review status values:
Yes
The group is reviewed.
No
The group is not reviewed.
To mark a related events group as reviewed or not reviewed, complete the following steps.

Procedure
1. View related events, see “Viewing related events” on page 436.
2. In the View Related Events portlet, within the Groups table, select a line item, which represents a group, and right-click. A menu is displayed.
3. From the menu, select Mark as Reviewed or Unmark as Reviewed. A success message in a green dialog box is displayed.

Results
The value in the Reviewed column is updated to Yes or No.

When you enable sorting for the group table, you can sort on the Yes or No values.

Sorting a related events view
Within a related events view, it is possible to sort the information that is displayed.

Before you begin
Within the View Related Events portlet, select the tab view where you want to apply the sorting.

About this task
Sorting by single column or multiple columns is possible within the Configuration, Group Sources, Groups, or Events table. Sorting within the Groups or Events table can be done independently or in parallel by using the sorting arrows that are displayed in the table column headings. When you apply sorting within the Configuration table, the configuration hierarchy disappears, but the configuration hierarchy reappears when you remove sorting. For more details about rollup information, see “Adding columns to seasonal and related event reports” on page 278.

Procedure
1. In the Configuration, Group Sources, Groups, or Events table, hover the mouse over a column heading. Arrows are displayed; hover the mouse over an arrow, and one of the following sort options is displayed:
• Click to sort Ascending
• Click to sort Descending
• Do not sort this column
2. Left-click to select and apply your sort option, or left-click a second or third time to view and apply one of the other sort options.
3. For sorting by multiple columns, apply a sort option to other column headings. Sorting by multiple columns is not limited, as sorting can be applied to all columns.

Results
The ordering of your applied sort options is visible when you hover over column headings. The sorting options that you apply are not persistent across portlet sessions; when you close the portlet, the applied sorting options are lost.

Filtering related events
You can filter the list of related events within the View Related Events portlet.

Procedure
1. Start the View Related Events portlet, see “Viewing related events” on page 436.
2. Within the toolbar, in the filter text box, enter the filter text that you want to use. Filtering commences as you type.

Results
The event list is reduced to list only the events that match the filter text in at least one of the displayed columns.

What to do next
To clear the filter text, click the x in the filter text box. After you clear the filter text, the event list displays all events.

Exporting related events for a specific configuration to Microsoft Excel
You can export related events for a specific configuration to a Microsoft Excel spreadsheet from a supported browser.

Before you begin
You view related events for one or more configurations in the View Related Events portlet. To access the View Related Events portlet, users must be assigned the ncw_analytics_admin role.

Procedure
To export related events for a specific configuration to a Microsoft Excel spreadsheet, complete the following steps.
1. Open the View Related Events portlet.
2. Select a specific configuration from the configuration table.
3. Click the Export Related Events button in the toolbar. After a short time, the Download export results link is displayed.
4. Click the link to download and save the Microsoft Excel file.

Results
The Microsoft Excel file contains a spreadsheet with the following tabs:
• Report Summary: This tab contains a summary report of the configuration that you selected.
• Groups Information: This tab contains the related events groups for the configuration that you selected.

• Groups Instances: This tab contains a list of all the related events instances for all of the related events groups for the configuration that you selected.
• Group Events: This tab contains a list of all the events that occurred in the related events groups for the configuration that you selected.
• Instance Events: This tab contains a list of all the events that occurred in all of the related events instances for all the related events groups for the configuration that you selected.
• Export Comments: This tab contains any comments relating to the export for informational purposes (for example, if the spreadsheet headers are truncated, or if the spreadsheet rows are truncated).

Exporting selected related events groups to Microsoft Excel
You can export related events groups for a specific configuration to a Microsoft Excel spreadsheet from a supported browser.

Before you begin
You view related events for one or more configurations in the View Related Events portlet. To access the View Related Events portlet, users must be assigned the ncw_analytics_admin role.

Procedure
To export related events groups for a specific configuration to a Microsoft Excel spreadsheet, complete the following steps.
1. Open the View Related Events portlet.
2. Select a specific configuration from the configuration table.
3. Select multiple related events groups by using the Ctrl key and select method. (You can also select multiple related events groups by using the click and drag method.)
4. After selecting multiple related events groups, right-click one of the selected groups and select Export Selected Groups. After a short time, the Download export results link is displayed.
5. Click the link to download and save the Microsoft Excel file.

Results
The Microsoft Excel file contains a spreadsheet with the following tabs:
• Report Summary: This tab contains a summary report of the configuration that you selected.
• Groups Information: This tab contains the related events groups for the configuration that you selected.
• Groups Instances: This tab contains a list of all the related events instances for all of the related events groups for the configuration that you selected.
• Group Events: This tab contains a list of all the events that occurred in the related events groups for the configuration that you selected.
• Instance Events: This tab contains a list of all the events that occurred in all of the related events instances for all the related events groups for the configuration that you selected.
• Export Comments: This tab contains any comments relating to the export for informational purposes (for example, if the spreadsheet headers are truncated, or if the spreadsheet rows are truncated).

Expired related events
An active related events group expires if no live events matching that related events group come into the ObjectServer for a period of six months. This six-month period is known as the automated expiry time. When the automated expiry time is reached for an active related events group, the group and its related events are moved to the Expired tab within the View Related Events portlet. Even though the expired groups and related events are visible in the Expired tab, you must acknowledge that the group is expired. In the Expired tab, right-click the group that you want to acknowledge. A menu is displayed; from the menu, select Validate. The default automated expiry time for an active configuration is six months. To change the expiry time, see “Changing the expiry time for related events groups” on page 412. To perform other actions on related events within the Expired tab, right-click a group or event and a menu is displayed. From the menu, select the action that you want to take.

Impacts of newly discovered groups on existing groups
An existing related events group can be replaced by a newly discovered related events group, or the newly discovered group can be ignored. The following points describe how Event Analytics manages newly discovered groups and existing groups.
• If a newly discovered group is a subset of, or the same as, an existing group within Watched, Active, Expired, or Archived, then Event Analytics ignores the newly discovered group.
• If a newly discovered group is a superset of an existing group within New, then Event Analytics deletes the existing group and displays the newly discovered group in New. Otherwise, no changes occur with the existing group.
• If a newly discovered group is a superset of an existing group within Watched, Active, or Expired, then Event Analytics moves the existing group to Archived, and displays the newly discovered group in Watched, Active, or Expired.
• If a newly discovered group is a superset of an existing group within Archived, then Event Analytics adds the newly discovered group into New and leaves the Archived group where it is.

Extra details about related events
Use the Related Event Details portlet to access extra details about related events. Within the Related Event Details portlet, you can complete the following types of tasks:
• View the occurrence time of an event.
• Switch between tabulated and charted event information.
• Remove an event from a related events group.
Only one instance of the Related Event Details portlet can be open at any stage. If you select Show Details for an event or an event group in the View Related Events portlet and the Related Event Details portlet is already open, the detail in the Related Event Details portlet refreshes to reflect your selected event or event group.

Viewing the occurrence time of an event
You can get details about the time that an event occurred.

About this task
When you look at the occurrence time of an event, you might be able to relate this event to some other events that occurred around the same time. Within a particular event group, the same event might occur multiple times. For example, if the group occurs 10 times within the time period over which the related events report is run, then there are 10 instances of the group. The event might occur in each of those group instances, resulting in 10 occurrence times for that event. Events in strong related events groups appear in all group instances, but events in medium or weak related events groups might appear in a subset of the group instances. This information is visible in the Related Event Details portlet by switching between different instances in the Event Group Instance Table, as explained in the following procedure.

Procedure
1. Start the View Related Events portlet, see “Viewing related events” on page 436.
2. Within the View Related Events portlet, in the Events table select an event or in the Group table select an event group, and right-click. A menu is displayed.
3. From the menu, select Show Details and a Related Event Details portlet opens.
4. Within the Related Event Details portlet, in the Events tab, two tables are displayed.

Related Event Group Instances table
This table lists instances of the related event group.
Date and Time
Date and time when the group instance occurred. The time of the group instance is set to the occurrence time of the event that you selected.
Unique Events
Indicates the number of unique events within each group instance, where a unique event is an event with a unique Tivoli Netcool/OMNIbus ObjectServer serial number. The unique events for a selected related event group instance are listed in the Events table to the right.
Note: The number of rows in the Events table is always equal to the number displayed in the Unique Events field. Within a related group instance, an event with the same event identifier can recur multiple times, and each time the event will have a different serial number. When the event with that identifier first occurs, it is assigned a unique Tivoli Netcool/OMNIbus ObjectServer serial number. If that event is deleted and another event with the same identifier occurs again within this same group instance, then this new event is assigned a new serial number. At that point, the Unique Events count in this column would be 2.
Contains Pivot Event
Indicates whether this group instance contains the pivot event. For more information about the pivot event, see “Changing the pivot event” on page 446.
Events table
Lists the unique events within the group instance selected in the Related Event Group Instances table.
Note: When you configure Event Analytics by using the wizard, it is best practice to add the Identifier column to the Instance report fields list. This ensures that you can see the event identifier value for each unique event listed in this table.
Offset
Displays the offset time relative to the pivot event.
Note: This column displays Not Applicable if the pivot event is not in the selected group instance. For more information about the pivot event, see “Changing the pivot event” on page 446.
Time
First occurrence time of this unique event.
Instances
Number of related group instances in which this event occurs. For related event groups with a strong profile, all events occur in all related group instances; for example, a value of 7/7 in the Instances column denotes that this is a related event group with a strong profile.
Severity
Severity of this event.
Node
Value of the node column for this event.
Summary
Description of the event.

Example
Assume that a related event group has been identified, made up of events with the following two event identifiers:
• JOPLBSMMDM Excessive ACN Transmission FailuresCP1 LP/9 AP/1 1 0x4374430 ( 70730800 ): let's label this event identifier A.
• COLUBSM CELL 1803DOOR_OPEN 1 0x14D1 ( 5329 ): let's label this event identifier B.

This related event group occurred three times, so that there are three related event group instances, as follows:
• Instance 1: 10 January
• Instance 2: 11 January
• Instance 3: 12 January
Let's look more closely at the related event group instance that occurred on 10 January (instance 1). During this instance, the events occurred as shown in the following table. As you can see from the table, there are seven unique events in this related event group instance.
Note: The two event identifiers have both occurred multiple times during this related event group instance.

Table 64. Unique events in instance 1 of the related event group

Server | Serial | Identifier | Label
AGG_P | 13452 | JOPLBSMMDM Excessive ACN Transmission FailuresCP1 LP/9 AP/1 1 0x4374430 ( 70730800 ) | A
AGG_P | 21474 | JOPLBSMMDM Excessive ACN Transmission FailuresCP1 LP/9 AP/1 1 0x4374430 ( 70730800 ) | A
AGG_P | 22485 | COLUBSM CELL 1803DOOR_OPEN 1 0x14D1 ( 5329 ) | B
AGG_P | 22490 | COLUBSM CELL 1803DOOR_OPEN 1 0x14D1 ( 5329 ) | B
AGG_P | 24579 | JOPLBSMMDM Excessive ACN Transmission FailuresCP1 LP/9 AP/1 1 0x4374430 ( 70730800 ) | A
AGG_P | 24595 | COLUBSM CELL 1803DOOR_OPEN 1 0x14D1 ( 5329 ) | B
AGG_P | 25284 | COLUBSM CELL 1803DOOR_OPEN 1 0x14D1 ( 5329 ) | B

Now, assume that this related event group has a strong relationship profile; i.e. the underlying event identifiers for each of these events occur in all related group instances. Based on this information, the Related Event Group Instances table might look like this. Note, in particular, the January 10 related event group instance, with seven unique events.

Table 65. Related Event Group Instances

Date and Time | Unique Events | Contains Pivot Event
January 10, 2020 10:40 AM | 7 | Yes
January 11, 2020 08:13 AM | 11 | Yes
January 12, 2020 03:18 AM | 5 | Yes

When the January 10 related event group instance is selected, the Events table might look like this. Note that the underlying event identifier for each of these events occurs in all three related event group instances, so the Instances column has the value 3/3 for each event.

Table 66. Events table

Offset | Time | Instances | Severity | Node | Summary
-00:19:00 | January 10, 2020 10:40 AM | 3/3 | Clear | Server123 | Some description
-00:14:00 | January 10, 2020 10:45 AM | 3/3 | Clear | Server123 | Some description
00:00:00 | January 10, 2020 10:59 AM | 3/3 | Clear | Server 456 | Some description
00:02:00 | January 10, 2020 11:01 AM | 3/3 | Clear | Server 456 | Some description
00:13:00 | January 10, 2020 11:12 AM | 3/3 | Clear | Server123 | Some description
00:22:00 | January 10, 2020 11:21 AM | 3/3 | Clear | Server 456 | Some description
00:33:00 | January 10, 2020 11:32 AM | 3/3 | Clear | Server 456 | Some description

Switching between tabulated and charted event information
Event information is visible in table or chart format, within the Related Event Details portlet.

About this task
A pivot event is an event that acts as a pivot, around which you can extrapolate related event occurrences in relation to the pivot event occurrence. To view the event distribution of a pivot event, complete the following steps to switch from tabulated event information to charted event information within the Related Event Details portlet.

Procedure
1. Start the View Related Events portlet, see “Viewing current analytics configurations” on page 407.
2. Within the View Related Events portlet, in the Events table select an event or in the Group table select an event group, and right-click. A menu is displayed.
3. From the menu, select Show Details and a Related Event Details portlet opens.
4. Within the Related Event Details portlet, in the Events tab, two tables are displayed.
Event Group Instance Table: This table lists each instance of the event group and the time at which the instance occurred. The time of the group instance is set to the occurrence time of the event that you selected.
Events Table: This table lists the events for a selected group instance.
5. From the Events tab toolbar, select Timeline. The event information is displayed in chart format. For information about understanding the timeline chart, see “Understanding the timeline chart” on page 445.
Note: The timeline chart scale is displayed as seconds (s), minutes (m), or hours (h). If many timelines are displayed, users might need to scroll down to view all of the timelines. The timeline chart scale is anchored in place at the top of the Timeline view.

Results
The timeline chart shows the event distribution for each event type in the group, relative to the pivot event. Each comb in the timeline chart represents an event type, and the teeth represent the number of instances of the event type. The pivot event is not represented by a comb, but the pivot event instance is always at zero seconds, minutes, or hours. For a selected event group instance, the highlighted tooth on each comb denotes the event type instance relative to the pivot event, in seconds, minutes, or hours. Long summary labels under the combs in the timeline chart are truncated. Move the mouse cursor over a truncated summary label to see the tooltip that shows the full summary label.

What to do next
• If there are many event types to view, use the pagination function in addition to scrolling. In the pagination toolbar, select the page to view and the page size.
• If you want to change the pivot event, see “Changing the pivot event” on page 446.
• If you want to revert to the tabulated event information, select Events from the Events tab toolbar.

Understanding the timeline chart
Event information in the Related Event Details portlet is available in chart format. You can use the Related Event Details portlet to view more information about related events. For example, you can view charted event information on a timeline chart. For more information about how to view the charted event information, see “Switching between tabulated and charted event information” on page 444.
The timeline chart shows the event distribution for each event type in the group, relative to the pivot event. The pivot event is always at zero seconds, minutes, or hours. Each comb in the timeline chart represents an event type, and the teeth represent the number of instances of the event type. The blue event markers represent all the times the event occurred relative to the pivot event. The red event markers indicate the time that the event occurred in the selected group instance.

Removing an event from a related events group
You can remove an event from a related events group.

About this task
When you believe that an event is no longer related to other events in the related events group, you can remove the event from that events group. When you remove an event from a related events group, the event is hidden from the UI and the correlation process, but the event is not deleted from the system. Complete the following steps to remove an event from a related events group.

Procedure
1. View event groups and events in the View Related Events portlet, see “Viewing related events” on page 436 and “Viewing related events by group” on page 437.
2. Within the View Related Events portlet, in the Group table select an event group or in the Event table select an event, and right-click. A menu is displayed.
3. From the menu, select Show Details and a Related Event Details portlet opens.
4. Within the Related Event Details portlet, in either the events table on the Events tab or in the Correlation Rule tab, right-click an event. A menu is displayed.
5. From the menu, select Remove Event.
6. A confirmation message is displayed; select Yes or No.

Results
The event is removed from the group and no longer appears in the event list in either the Events tab or the Correlation Rule tab.

Archiving related events
You can archive related events by archiving the related events group.

Before you begin
View event groups and events in the View Related Events portlet, see “Viewing related events” on page 436 and “Viewing related events by group” on page 437.

About this task
When you believe that events within a related events group are no longer relevant, you can archive that group. Complete the following steps to archive a related events group.

Procedure
• To archive a related events group within the View Related Events portlet, complete the following steps.
1. Within the View Related Events portlet, select the New, Watched, Active, or Expired tab.
2. In your chosen tab, within the Group table select an event group and right-click. A menu is displayed.
3. From the menu, select Archive.
• To archive a related events group within the Related Event Details portlet, complete the following steps.
1. Within the View Related Events portlet, select the New, Watched, Active, or Expired tab.
2. In your chosen tab, within the Group table select an event group or within the Event table select an event, and right-click. A menu is displayed.
3. From the menu, select Show Details, and a Related Event Details portlet opens.
4. Within the Related Event Details portlet, in either the Events or Correlation Rule tab, select Archive. A success message is displayed.

Results
The related events group moves into the Archived tab in the View Related Events portlet.

What to do next
Within the Archived tab, from the list of archived groups, you can select a group and right-click. A menu is displayed with a choice of tasks for your selected group.
• If you want to move a group out of the Archived tab and into the New tab, from the menu select New. A number of actions can be performed with groups and events within the New tab, see “Work with related events” on page 434.
• If you want to delete a related events group from the system, from the menu select Delete. This is the only way to delete a related events group from the system.

Changing the pivot event
You can change a pivot event to view related events that are of interest.

About this task
Use a pivot event as a baseline to determine a sequence of events in the group. A pivot event is displayed in the Related Event Details portlet. Within the Related Event Details portlet, a pivot event can be changed. Also, a pivot event history of the 20 most recent pivot events is available for you to revisit.
• When you open the Related Event Details portlet from an event in the View Related Events portlet, that event becomes the pivot event within the Related Event Details portlet.
• When you open the Related Event Details portlet from a group in the View Related Events portlet, one of the events from that group becomes the pivot event within the Related Event Details portlet. The pivot event is not always the parent event.
Complete the following steps to change the pivot event.

Procedure
1. Within the View Related Events portlet, right-click an event or a group. A menu is displayed.
2. From the menu, select Show Details. The Related Event Details portlet opens.
3. Within the Related Event Details portlet, information about the pivot event is displayed.

• In the Event Group Instance table, the Contains Pivot Event column reports whether a group instance has a pivot event. Some groups might not have a Pivot Event set because the event identity is different for these events.
• In the Events table, the pivot event is identifiable by a red border.
• Next to the Group Name entry, a Pivot Event link is displayed. To see more details about the pivot event, click the Pivot Event link and a More Information window opens displaying details about the pivot event.
4. In the Related Event Details portlet, within the Events tab, in the Events table, right-click the event that you want to identify as the pivot event. A menu is displayed.
5. From the menu, select Set as Pivot Event.

Results
Your selected event becomes the pivot event, identified by a red border. Data updates in the timeline chart, in the Pivot Event link, in the Event Group Instance table, and in the Events table.

What to do next
Within the Related Event Details portlet, you can reselect one of your 20 recent pivot events as your current pivot event. From the Events tab toolbar, select either the forward arrow or back arrow to select one of the 20 recent pivot events.

Correlation rules and related events
A correlation rule is a mechanism that enables automatic action on real-time events that are received by the ObjectServer, if a trigger condition is met. The result is fewer events in the Event Viewer for the operator to troubleshoot. Writing a correlation rule in code is complex, but the related events function removes the need for administrators to code a correlation rule. Instead, the related events function derives a correlation rule from your related events configuration and deploys the correlation rule, all through the GUI. After the correlation rule is deployed in a live environment, if the trigger condition is met, then automatic action occurs.
• The trigger condition is the occurrence of one or more related event types, from an event group, on the Tivoli Netcool/OMNIbus ObjectServer. Only one event must be the parent event. Related event types are derived from your related events configuration.
• The automatic action is the automatic creation of a synthetic event with some of the properties of the parent event, and the automatic grouping of the event group events under this synthetic event.

Viewing events that form a correlation rule
You can view the related events that form a correlation rule in the Related Event Details portlet.

About this task
Administrators can view related events that form a correlation rule to understand associations between events. Complete the following steps to view related events that form a correlation rule.

Procedure
1. Start the View Related Events portlet, see “Viewing current analytics configurations” on page 407.
2. Within the View Related Events portlet, in the Events table select an event or in the Group table select an event group, and right-click. A menu is displayed.
3. From the menu, select Show Details and a Related Event Details portlet opens.
4. In the Related Event Details portlet, select the Correlation Rule tab.

Results
A table displays with a list of the related events that make up the correlation rule.

Selecting a root cause event for a correlation rule
You can select the root cause event for the correlation rule.

About this task
When you select the root cause event for the correlation rule, the selected event becomes a parent event. A parent synthetic event is created with some of the properties from the parent event, and a parent-child relationship is created between the parent synthetic event and the related events. When these events occur in a live environment, they are displayed in the Event Viewer within a group as child events of the parent synthetic event. With this view of events, you can quickly focus on the root cause of the event, rather than looking at other related events. To select the root cause event for the correlation rule, complete the following steps. If you want to see automated suggestions about the root cause event for a group, see the configuration details in “Adding columns to seasonal and related event reports” on page 278.

Procedure
1. View all events that form a correlation rule, see “Viewing events that form a correlation rule” on page 447.
2. In the Related Event Details portlet, within the Correlation Rule tab, right-click an event and select Use Values in Parent Synthetic Event.

Results
The table in the Correlation Rule tab refreshes, and the Use Values in Parent Synthetic Event column for the selected event updates to Yes, which indicates that this event is now the parent event. For a related events group, if all of the children of a parent synthetic event are cleared in the Event Viewer, then the parent synthetic event is also cleared in the Event Viewer. If another related event comes in for that same group, the parent synthetic event either reopens or re-creates in the Event Viewer, depending on the status of the parent synthetic event.

Watching a correlation rule
You can watch a correlation rule and monitor the rule's performance before you deploy the rule to correlate live data.

Before you begin
Complete your review of the related events and the parent event that form the correlation rule. If necessary, change the correlation rule or related events configuration.

About this task
When you are happy with the correlation rule, you can choose to Watch the correlation rule. When you choose to Watch the correlation rule, the rule moves out of its existing tab and into the Watched tab within the View Related Events portlet. While the rule is in Watched, the rule does not create synthetic events or correlate, but it does record performance statistics. You can check the rule's performance before you deploy the rule to correlate live data.
Note: On rerun of a related events configuration scan, a warning message is displayed if any new groups are discovered that conflict with groups on which an existing watched rule is based.
• Click OK to accept the warning and continue watching the existing rule. The new groups are ignored.
• Click Cancel to ignore the warning and replace the existing watched rule with a new rule based on the newly discovered group.
Note: Any NEW groups that conflict with existing non-NEW groups, and any already existing patterns, cannot be edited. Only newly discovered patterns can be edited.
Complete the following steps to Watch the correlation rule.

Procedure
• Within the View Related Events portlet, perform the following steps:
a) View related events by group, see “Viewing related events by group” on page 437.
b) In the View Related Events portlet, within the group table, select either a related events group or a related events configuration and right-click. A menu is displayed.
c) From the menu, select Watch.
• Within the Related Event Details portlet for a group or an event, perform the following steps:
a) View related events or related event groups, see “Viewing related events” on page 436 and “Viewing related events by group” on page 437.
b) Select an event or a related events group.
– In the View Related Events portlet, within the group table, select a related events group and right-click. A menu is displayed.
– In the View Related Events portlet, within the event table, select an event and right-click. A menu is displayed.
c) From the menu, select Show Details. The Related Event Details portlet opens.
d) In the Related Event Details portlet, within any tab, select Watch.

Results
The rule displays in the Watched tab.

What to do next
Within the Watched tab, monitor the performance statistics for the rule. When you are happy with the performance statistics, consider “Deploying a correlation rule” on page 449.

Deploying a correlation rule
You can deploy a correlation rule so that the rule correlates live data.

Before you begin
Complete your review of the related events and the parent event that form the correlation rule. If necessary, change the correlation rule or related events configuration.

About this task
When you are happy with the correlation rule, you can choose to Deploy the correlation rule. When you choose to Deploy the correlation rule, the rule moves out of its existing tab and into the Active tab within the View Related Events portlet. The active rule's algorithm works to identify the related events among the live incoming events and correlates them, so the operator knows which event to focus on. Performance statistics about the rule are logged, which you can use to verify whether the deployed rule is being triggered.
Note: On rerun of a related events configuration scan, a warning message is displayed if any new groups are discovered that conflict with groups on which an existing deployed rule is based.
• Click OK to accept the warning and continue deploying the existing rule. The new groups are ignored.
• Click Cancel to ignore the warning and replace the existing deployed rule with a new rule based on the newly discovered group.
Complete the following steps to Deploy the correlation rule.

Procedure
• Within the View Related Events portlet, perform the following steps:
a) View related events by group, see “Viewing related events by group” on page 437.

b) In the View Related Events portlet, within the groups table, select either a related events group or a related events configuration and right-click. A menu is displayed.
c) From the menu, select Deploy.
• Within the Related Event Details portlet for a group or an event, perform the following steps:
a) View related events or related event groups, see “Viewing related events” on page 436 and “Viewing related events by group” on page 437.
b) Select an event or a related events group.
– In the View Related Events portlet, within the groups table, select a related events group and right-click. A menu is displayed.
– In the View Related Events portlet, within the events table, select an event and right-click. A menu is displayed.
c) From the menu, select Show Details. The Related Event Details portlet opens.
d) In the Related Event Details portlet, within any tab, select Deploy.

Results
The rule moves out of the New tab and into the Active tab within the View Related Events portlet.

What to do next
When you establish confidence with the rules and groups that are generated by a related events configuration, you might want all the generated groups to be automatically deployed in the future. If you want all the generated groups to be automatically deployed, return to “Creating a new or modifying an existing analytics configuration” on page 408 and, within the Configure Related Events window, tick the option Automatically deploy rules discovered by this configuration.

Viewing performance statistics for a correlation rule
You can view performance statistics for a correlation rule in the View Related Events portlet, within the Watched, Active, or Expired tabs.

Performance statistics in the group table
Times Fired
The total number of times the rule ran since the rule became active.
Times Fired in Last Month
The total number of times that the rule fired in the current 30 days. The last-month time period is counted as 30 days instead of a calendar month. Time periods are calculated from the creation date of the group.
Last Fired
The last date or time that the rule was fired.
Last Occurrence I
The percentage of events that occurred from the group in the last fired rule.
Last Occurrence II
The percentage of events that occurred from the group in the second last fired rule.
Last Occurrence III
The percentage of events that occurred from the group in the third last fired rule.

Performance statistics in the event table
Occurrence
The number of times the event occurred, for all the times the rule fired.

Reset performance statistics
You can reset performance statistics to zero for a group in the Watched, Active, or Expired tabs. To reset performance statistics, right-click the group name and from the menu select Reset performance statistics. A message is displayed indicating that the operation will reset statistics data for the selected correlation rule. The message also indicates that you will not be able to retrieve this data. Click Yes to continue with the operation or No to stop the operation. A success message is displayed after you select Yes. Resetting performance statistics to zero for a group also causes the following columns to be cleared: Times Fired, Times Fired in Last Month, and Last Fired.
Note that performance statistics are not collected for the Archived tab. When a rule is moved between states, the performance statistics are reset. Every time an action is triggered by the rule, the performance statistics increase.

Related Events statistics
When some events are sent, a synthetic event is created, but the statistics can appear not to be updated. This is because there are delays in updating the related events statistics. These delays are due to the time window during which the related event groups are open, so that events can be correlated. The statistics (Times Fired, Times Fired in Last Month, Last Fired) are updated only when the Group Time to Live has expired. The sequence is: the synthetic event is triggered, the action is done, and the statistics are calculated later. Take the following query as an example:

SELECT GROUPTTL FROM RELATEDEVENTS.RE_GROUPS WHERE GROUPNAME = 'XXX';

In one observed case, the GROUPTTL was equal to 82800000 milliseconds, which is 23 hours. In that instance, an update to the statistics wouldn't be visible to the user for 23 hours. GROUPTTL can be reduced to 10 seconds by running the following command:

UPDATE RELATEDEVENTS.RE_GROUPS SET GROUPTTL = 10000 WHERE GROUPNAME = 'XXX';

Subsequent tests then show that the statistics are updated promptly. An algorithm creates GROUPTTL based on historical occurrences of the events. There is no default value for GROUPTTL and no best practice recommendation; GROUPTTL should be determined and set on a per-case basis.
Data is displayed for the Times Fired, Times Fired in Last Month, and Last Fired columns for groups allocated to patterns. In previous versions, the group statistics were only updated for unallocated groups and not updated for groups allocated to a pattern. The new statistics represent the total occurrences for events with an active, watched, or expired status.
The Times Fired value increments every time there is an event matching the Event Identifier of a deployed group. This statistic is suppressed by default. It can be enabled by turning on the timesfired_group_stats_enable property.
The Times Fired in Last Month value is the sum of events received in the last 30 days. This statistic is suppressed by default. It can be enabled by turning on the timesfired_group_stats_enable property. When the 30-day time period has passed and a new event comes in, this value resets back to 0. When an event is not firing for two or more months, the value persists and is not reset. In this case, the value in the Times Fired in Last Month column refers to the 30 days before the timestamp in the Last Fired column.
The Last Fired value is the timestamp of the last time such an event was received. This statistic is suppressed by default. It can be enabled by turning on the timesfired_group_stats_enable property.
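To estimate how long the statistics delay might be across all of your groups, you can query the group TTL values directly. The following query is a minimal sketch that assumes the same RELATEDEVENTS.RE_GROUPS table and columns used in the preceding examples; the one-hour threshold (3600000 milliseconds) is an arbitrary illustrative value:

SELECT GROUPNAME, GROUPTTL FROM RELATEDEVENTS.RE_GROUPS WHERE GROUPTTL > 3600000;

Any group returned by this query holds its correlation window open for more than one hour, so statistics updates for that group are delayed by at least that long.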

Performance statistics in the group table
The statistics in the Times Fired column in the Group Sources panel represent the sum of incoming events that match a given pattern. The value increments every time there is an incoming event that matches a pattern in a watched, active, or expired state. This statistic is active by default, and can't be turned off.
When a group is part of a pattern, the statistics in the Times Fired column in the Groups panel represent the sum of incoming events that are part of the group and match a given pattern. The value increments every time there is an event that is part of an active group and matches an active pattern. This functionality is controlled with the timesfired_group_stats_enable property. By default, this statistic is disabled. To enable this functionality, complete the following steps:

1. Export the current default IBM Netcool Operations Insight property values to a file by running the following command:

./nci_trigger / NOI_DefaultValues_Export FILENAME

2. In the new file, change the timesfired_group_stats_enable value from false to true.
3. Import the updated properties file by running the following command:

./nci_trigger / NOI_DefaultValues_Configure FILENAME

When a group is not part of a pattern, the statistics in the Times Fired column in the Groups panel represent the sum of incoming events that are part of a group in active status. The value increments every time there is an event that is part of an active group. This statistic is active by default, and can't be turned off.
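For illustration, the edit in step 2 might look like the following line in the exported file. This is a sketch that assumes the export lists properties as simple name=value pairs; the surrounding property names in your file may differ:

timesfired_group_stats_enable=true

After the import in step 3, the Times Fired statistics for pattern-allocated groups begin to accumulate.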

Creating patterns
Groups of related events are discovered by using Related Events analytics. Automatically discovered groups in the View Related Events portlet can be used to create patterns.

About patterns
Use this information to understand how patterns are created and how they differ from related event groups.

Patterns and related event groups
Discovered groups can be deployed independently of a pattern. In this case, incoming events are matched by using the Event Identifier field and grouped in the Event Viewer by Resource field. In contrast, the use of patterns allows events with different Event Identifier fields to be grouped in the Event Viewer. In this case, incoming events are matched by using the Event Type field. Matching events for deployed patterns are grouped by Resource in the Event Viewer. To allow deployed patterns to group events across multiple resources, that is, to allow a group of related events with different Resource field values to be allocated to a pattern, use name similarity or specify a regular expression in the pattern.
Event Types
By default, the Event Type field is set to the AlertGroup column in the ObjectServer alerts.status table. You can configure the system to use a different column or combination of columns by using the Event Analytics configuration wizard, as described in “Configuring event pattern processing” on page 267. When a pattern is manually created, at least one Event Type must be selected in the drop-down list. There is no maximum number of Event Types. Whichever related event groups are allocated to the pattern must have all the Event Types specified (and no extra ones). For example, if a pattern is created with Event Type set to NmosEventType and ITNMMonitor, then only groups with related events that have both these event types can be allocated. In this example, if a group contains three related events with AlertGroup values of, in turn, NmosEventType, ITNMMonitor, and some other value, then this group is not a candidate for allocation to the pattern. It is not a candidate because its related events contain an Event Type that is not part of the pattern.
Resources
A resource can be a hostname ("server name") or an IP address. By default, the Resource field is set to the Node column in the ObjectServer alerts.status table. You can configure the system to use a different column by using the Event Analytics configuration wizard, as described in “Configuring event pattern processing” on page 267. If name similarity is disabled and no regular expression is specified for the pattern, then all the resource values for the events within a related events group must be the same.

Note: The check on the Resource field is performed by using the historical event data, not the related event data. However, most of the time the Resource value in the historical event data is the same as the Resource value in the related event data.

If many related events groups are allocated to a pattern, then the Resource value does not have to match across groups; however, the Resource value must match within a group. Name similarity is enabled by default in Netcool Operations Insight V1.5.0 and higher. When name similarity is enabled, the Resource values in the related events group (by default, the contents of the Node column) must be sufficiently similar based on the name similarity settings. The default name similarity settings require the lead characters to be the same and the text to be 90% similar.
Warning: There is an important exception to this scenario. If the Node column contains IP addresses, then the different IP address values must match down to the subnet value; that is, the first, second, and third segments of the IP address must be the same. For example:
• 123.456.789.10 matches 123.456.789.11.
• 123.456.789.10 does not match 123.456.788.10.
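Before you allocate a group to a pattern, it can be useful to see which resource values actually occur for a given event type in the historical event data. The following ObjectServer SQL query is a minimal sketch; it assumes the default AlertGroup and Node columns described above, and the event type value NmosEventType is only an illustration:

SELECT Node, AlertGroup, Summary FROM alerts.status WHERE AlertGroup = 'NmosEventType';

If the returned Node values differ only slightly, name similarity can allow them to be grouped under one pattern; if they differ completely, specifying a regular expression in the pattern is the better option.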

Example
For example, assume that Link up and Link down events regularly occur on Node A. Analytics detects the occurrence in the historical data and generates a specific grouping of those two events for Node A. Likewise, if Link up and Link down also regularly occur on Node B, a grouping of those two events is generated, but specifically for Node B. With generalization, the association of such events is encapsulated by the system as a pattern: Link up / Link down on any Node. In generalization terms, Link Up / Link Down represents the event type and Node* represents the resource.

Advantages of patterns
A created pattern has the following advantages over a related event group:
• For any instance of a pattern, not all of the events in the definition must occur for the pattern to apply. This depends on the Trigger Action settings. For more information about Trigger Action settings, see “Creating an event pattern” on page 459.
• The pattern definition encompasses groups of events with the defined event types.
• A single pattern can capture the occurrence of events on any resource. For example, with discovered groups, analytics only found historical events that occurred on a specific hostname, and created groups for each hostname. If real-time events happen on different hostnames in the future, the discovered groups will not capture them. However, patterns will discover the events because the event type is the same.
• A pattern can encompass event groupings that were not previously seen in the event history. An event group that did not previously occur on a specific resource is identified by the pattern, as the pattern is not resource dependent, but event type specific.
Note: When selecting an event type (during the event type configuration), the column that identifies the event type should be unique across multiple event groups.
• A single pattern definition can encompass multiple event groups. Patterns act on event types for different hostnames, which might have occurred historically (discovered groups) or will happen in future real-time events. For example, an event type could be "Server Shutting Down", "Server Starting Up", "Interface Ping Failure", and so on. Each group is resource specific, but an event pattern is event type specific. Therefore, an environment might have multiple groups for different resources, and an event pattern will encompass all of those different groups because their event type is the same.

Extending patterns
Using regular expressions and name similarity, you can enable the discovery of a pattern instance on more than one resource. By default, live events are processed for inclusion in event patterns by means of exact matching of the resource or resources associated with that event: for each resource column, there must be an exact match between the resource column value in the live event and the expected resource value in the pattern. You can extend pattern matching functionality by using the following methods to enable the discovery of a pattern instance on more than one resource:
• Regular expressions
• Name similarity

Regular expressions
You can define a regular expression to apply to the contents of the resource field or fields during pattern matching. Resource names that match the regular expression are candidates to be included in a single pattern. You can optionally specify a regular expression when you create a pattern.

Name similarity
Name similarity uses a string comparison algorithm to determine whether the resource names contained in two resource fields are similar. Name similarity is enabled by default, and is applied at two points in the process:
• When patterns are suggested, as described in “Suggested patterns” on page 465.
• When live events are correlated to identify pattern instances, as described in “Examples of name similarity” on page 457.
The name similarity settings force the lead character to be the same and the main body of the resource name to be 90% similar. For more information on how to configure name similarity settings, see “Configuring name similarity” on page 297.
Note: There is a notable exception to this. If the Node column (or whichever columns are used to store resource values) holds IP addresses, then the IP address must match down to the subnet value. In an IPv4 environment, this means the first, second, and third octets must be the same. For example, the following two IP addresses will match for the purposes of name similarity:
• 123.456.789.10
• 123.456.789.11
However, the following two IP addresses will not match:
• 123.456.789.10
• 123.456.788.11

Using the methods together
Name similarity and regular expression functionality are not mutually exclusive. If name similarity is configured, you can also define regular expressions. Pattern matching is processed in the following order:
1. Exact match
2. Regular expression
3. Name similarity

Examples of pattern processing
The following examples illustrate how pattern processing is performed.

Examples of multiple resource pattern processing
This topic presents some very simple examples of event pattern processing with multiple resource fields.

Before you begin
In order to process multiple resource columns, you must have configured multiple resource fields in the pattern definitions. For more information on creating pattern definitions, see “Creating an event pattern” on page 459.

About this task
For the sake of simplicity, these examples keep the value of the event type constant, in order to present the impact of changing the settings associated with multiple resource columns. The event type is held in the AlertGroup field, and the value of event type used for this example is power, indicating that the event is associated with a power issue.

Example 1: Single event type value with two resource fields
The resource information is distributed across two different event columns:
• Node
• NodeAlias
Distribution of resource information across two different event columns simulates a scenario where resource data is held in different event columns. One possible reason for this is consolidation of event data from different sources within an organization. The following event snippet displays the dataset used in this example.

1   Identifier      Node          NodeAlias          AlertGroup  FirstOccurrence
2   event00000000   node00000000  aliasnode00000000  power       2020-04-07 00:11:00
3   event00000001   node00000001  aliasnode00000001  power       2020-04-07 00:09:00
4   event00000002   node00000002  aliasnode00000002  power       2020-04-07 00:07:00
5   event00000003   node00000003  aliasnode00000003  power       2020-04-07 00:05:00
6   event00000004   node00000004  aliasnode00000004  power       2020-04-07 00:03:00
7   event00000005   node00000005  aliasnode00000005  power       2020-04-07 00:01:00
8   event00000006   node00000006  aliasnode00000006  power       2020-04-06 23:59:00
9   event00000007   node00000007  aliasnode00000007  power       2020-04-06 23:57:00
10  event00000008   node00000008  aliasnode00000008  power       2020-04-06 23:55:00
11  event00000009   node00000009  aliasnode00000009  power       2020-04-06 23:53:00

The following configuration settings can be applied to this dataset:
Name similarity
Name similarity can be switched ON or OFF. For more information about name similarity, see “Extending patterns” on page 454.
Multiple resource correlation logic parameter
The repattern_multiresource_correlation_logic parameter specifies the Boolean logic to be applied by the event pattern processing system when resource data is held in multiple event columns. This parameter can take the values OR and AND. For more information about this parameter, see “Configuring multiple resource columns” on page 300.

The following table shows how event pattern results vary when these settings are configured in different ways:

Table 67. Event pattern results
    Name similarity  Correlation logic  Event pattern instances  Events in each pattern instance
1   ON               OR                 1                        10 events
2   ON               AND                10                       1 event
3   OFF              OR                 10                       1 event
4   OFF              AND                10                       1 event

The results demonstrate the following rules:
• Name similarity is ignored when AND logic is being applied. This can be seen from row 2, where even though name similarity is set to ON, the events are not grouped into a single pattern instance, meaning that the pattern processing treats all of the Node and NodeAlias values as if they are different, even though they meet the name similarity criteria. Note: The same rule applies to regular expressions. These are also ignored when AND logic is being applied.
• AND logic is order specific, meaning that the resource values are strictly matched with the respective resource name. This can be seen from row 1, where a single pattern instance is produced, because the resource values within Node and NodeAlias are consistent: the Node column always contains a resource value of the form node0000000x and the NodeAlias column always contains a resource value of the form aliasnode0000000x.
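As a rough illustration of these rules (and of the NULL handling described in Example 2), the following sketch shows how OR and AND logic might combine per-column matches for a pair of events. It reuses the hypothetical resource_matches and names_similar helpers from the sketches in “Extending patterns”; the product's real implementation is not documented here.

def event_correlates(live, pattern, columns, logic="OR", name_similarity=True, regex=None):
    if logic == "AND":
        # AND uses exact matching only; name similarity and regular
        # expressions are ignored. A NULL or empty resource value makes the
        # event invalid for pattern processing (see Example 2).
        if any(not live.get(col) for col in columns):
            return False
        return all(live.get(col) == pattern.get(col) for col in columns)
    # OR: one matching resource column is enough, and the extended matching
    # methods (regular expressions, name similarity) apply.
    return any(
        resource_matches(live.get(col), pattern.get(col), regex, name_similarity)
        for col in columns
    )

# Two events from the dataset above. With OR and name similarity ON they
# correlate (row 1 of Table 67); with AND they do not, so each event forms
# its own single-event instance (row 2 of Table 67).
e1 = {"Node": "node00000000", "NodeAlias": "aliasnode00000000"}
e2 = {"Node": "node00000001", "NodeAlias": "aliasnode00000001"}
print(event_correlates(e1, e2, ["Node", "NodeAlias"], logic="OR"))   # True
print(event_correlates(e1, e2, ["Node", "NodeAlias"], logic="AND"))  # False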

Example 2: Single event type value with two resource fields including empty strings
The resource information is distributed across two different event columns:
• Node
• NodeAlias
Distribution of resource information across two different event columns simulates a scenario where resource data is held in different event columns. One possible reason for this is consolidation of event data from different sources within an organization. The following event snippet displays the dataset used in this example. Notice that some of the resource values are empty strings.

1   Identifier      Node          NodeAlias  AlertGroup  FirstOccurrence
2   event00000000   node00000000             power       2020-04-07 00:11:00
3   event00000001   node00000001             power       2020-04-07 00:09:00
4   event00000002   node00000002             power       2020-04-07 00:07:00
5   event00000003   node00000003             power       2020-04-07 00:05:00
6   event00000004   node00000004             power       2020-04-07 00:03:00
7   event00000005   node00000005             power       2020-04-07 00:01:00
8   event00000006   node00000006             power       2020-04-06 23:59:00
9   event00000007   node00000007             power       2020-04-06 23:57:00
10  event00000008   node00000008             power       2020-04-06 23:55:00
11  event00000009   node00000009             power       2020-04-06 23:53:00

The following configuration settings can be applied to this dataset:
Name similarity
Name similarity can be switched ON or OFF. For more information about name similarity, see “Extending patterns” on page 454.
Multiple resource correlation logic parameter
The repattern_multiresource_correlation_logic parameter specifies the Boolean logic to be applied by the event pattern processing system when resource data is held in multiple event columns. This parameter can take the values OR and AND. For more information about this parameter, see “Configuring multiple resource columns” on page 300.
The following table shows how event pattern results vary when these settings are configured in different ways:

Table 68. Event pattern results
    Name similarity  Correlation logic  Event pattern instances  Events in each pattern instance
1   ON               OR                 1                        10 events
2   ON               AND                0                        Not applicable
3   OFF              OR                 10                       1 event
4   OFF              AND                0                        Not applicable

The results demonstrate the following rules:
• Name similarity is effective when OR logic is being applied. This can be seen from row 1, where the events are grouped into a single pattern instance. This happens because the OR logic enables name similarity to be applied to all of the events on both resource columns. By eliminating the leading and trailing edge characters, name similarity determines that all of the resource names are similar, and this results in a single pattern instance. Note: The same rule applies to regular expressions. These are also effective when OR logic is being applied.
• AND logic is only effective when an event being considered for inclusion in a pattern has more than one resource defined. This can be seen from rows 2 and 4, where no pattern instances at all are produced.
• NULL and empty strings render the resource specified in the event invalid. This is illustrated by rows 2 and 4, where no pattern instances are created when AND logic is applied.

Examples of name similarity
This topic presents examples of similar resource names that might be discovered by using the default name similarity settings.
By default, name similarity is configured with the following settings. For more information on these configuration parameters, see “Configuring name similarity” on page 297.

name_similarity_default_threshold (value: 0.9)
String comparison threshold. For example, a similarity threshold value of 0.9 means that strings must match by at least that value to be considered similar. A value of 1 equates to identical strings; a value of 0 equates to completely dissimilar strings. For more information about the threshold value, see Similarity threshold value.
name_similarity_default_lead_restriction (value: 1)
Lead restriction. Number of characters at the beginning of the string that must be identical.
name_similarity_default_tail_restriction (value: 0)
Tail restriction. Number of characters at the end of the string that must be identical.

Based on these settings, the following event snippet presents an example of resource name similarity analysis. Note that the resource name is stored in the resource column. NODE is the default resource column, but this can be changed for the pattern event type.

1   NODE                                   SUMMARY                ALERTKEY          Similar?
2   acme.env1.base.adm_chk_probeCheck      System Alert SEV2 ABC  adm_probe         No
3   cnz.env2.base.adm_chk_reports          System Alert SEV2 ABC  adm_report        No
4   abc.lyf.base.logs1                     System Alert SEV2 DEF  logs1             No
5   abc.gbs.stato.dotnetcore               System Alert SEV2 ABC  runtime_down      No
6   caripa.env1.stato.dotnetcore           System Alert SEV2 GHI  runtime_down      Yes
7   caripa.env1.stato.TNT                  System Alert SEV2 GHI  runtime_down      Yes
8   caripa.env1.stato.TNT                  System Alert SEV2 GHI  runtime_down      Yes
9   emperor.env3.stato.pythonRuntime       System Alert SEV2 ABC  runtime_down      No
10  abc.env5.base.bash.total_cpu_noncore   System Alert SEV2 DEF  bash_cpu_noncore  No
11  abc.cio.base.total_cpu_noncore         System Alert SEV2 ABC  bash_cpu_noncore  No
12  banca.env1.base.bosh.jobstate.console  System Alert SEV2 GHI  job_fail          No

As a result of this similarity analysis, the resource names in the NODE column for the events listed in rows 6, 7, and 8 are considered similar. The reasons for this include the following:
• All of the resource names other than those in the NODE column of rows 3, 6, 7, and 8 start with a letter other than c, so they are rejected automatically, because the lead restriction is set to 1 character.
• The resource name in the NODE column of row 3 fails the similarity threshold of 0.9 because it is very different from the resource names in rows 6, 7, and 8.
• The tail restriction is set to 0, which allows the resource name in row 6 to pass overall similarity, even though the final letters of its resource name are different from the final letters of the resource names in rows 7 and 8.

Starting the Events Pattern portlet
The administrator can start the Events Pattern portlet from a number of locations on the Event Analytics UI.

Before you begin
To access the View Related Events, Related Event Details, and Events Pattern portlets, users must be assigned the ncw_analytics_admin role.

About this task
You can start the Events Pattern portlet from the View Related Events portlet or the Related Event Details portlet. Starting the Events Pattern portlet directly from the Related Event Details portlet ensures that you do not need to return to the View Related Events portlet to start the Events Pattern portlet after you review the details of a group.

Procedure
You can start the Events Pattern portlet from the View Related Events portlet or the Related Event Details portlet.
1. To start the Events Pattern portlet from the View Related Events portlet, start the View Related Events portlet and select one of the following options. For more information about starting the View Related Events portlet, see “Viewing related events” on page 436.
a) To create a pattern, complete the following steps.
1) Select a related events group in the Groups table.
2) Right-click the related events group and select Create Pattern.
b) To edit an existing pattern, complete the following steps.
1) Select a pattern in the Group Sources table.
2) Right-click the pattern and select Edit Pattern.
2. To start the Events Pattern portlet from the Related Event Details portlet, complete the following steps.
a) Start the Related Event Details portlet. For more information, see “Viewing related event details for a seasonal event” on page 436.
b) Click Create Pattern.
Note: The Events Pattern portlet is updated with each newly selected group.

What to do next
Enter the details of the pattern in the Events Pattern portlet. For more information about completing the Events Pattern portlet, see “Creating an event pattern” on page 459.

Creating an event pattern
You can create a pattern based on the automatically discovered groups.

Before you begin
To access the View Related Events and Events Pattern portlets, users must be assigned the ncw_analytics_admin role.

About this task
A related events configuration automatically discovers groups of events that apply to specific managed resources. Based on an automatically discovered group, you can create an event pattern that is not specific to those resources.

Procedure
1. Start the Events Pattern portlet for a group. For more information about starting the portlet, see “Starting the Events Pattern portlet” on page 459.
2. Complete the parameter fields in the Pattern Criteria tab of the Events Pattern portlet.
Merge into
Merge a Related Event Group into an existing pattern, or select NONE to create a new pattern. To merge a group into a pattern, select from the list of patterns with one or more event types in common. NONE is the default option.
Name
The name of the pattern. The name must contain alphanumeric characters. Special characters are not permitted.
Pattern Filter
The ObjectServer SQL filter that is applied to the pattern. This filter is used to restrict the events to which the pattern is applied. For example, enter Summary NOT LIKE '%maintenance%'.
Time between first and last event
The maximum time that can elapse between the occurrence of the first event and the last event in this pattern, measured in minutes. The default value is determined by the Related Events Group on which the pattern is based. Events that occur outside of this time window are not considered part of this group.
Trigger Action
Select the Trigger Action check box to group the live events when the selected event comes into the ObjectServer. When an event with the selected event type occurs, the grouping is triggered to start. The created grouping includes events that contain all of the selected event types. For example, if the three event types A, B, and C are part of the pattern criteria, with only the Trigger Action check box for event C selected, the grouping only occurs when an event with event type C occurs. The grouping contains events that contain all three event types.
Note: A group is triggered even if only one event with the triggering event type occurs. In this case, a group is created in the Event Viewer that is made up either of a synthetic parent event with the triggering event as a child event, or of the triggering event as both parent and child event, depending on how you configure the Parent Event tab of the Events Pattern portlet in step 3.
Event Type
The event type or types that are included in the pattern. The Event Type is prepopulated with existing event types for the selected pattern, and can be modified.
Note: Triangle, circle, and square icons signify where the event types originate from, when a group is merged into an existing pattern:
• Triangle: Common to both the existing pattern and the group.
• Circle: Part of the group.
• Square: Part of the existing pattern.
Resource Column(s)
The resource or resources to which the action is applied. The Resource Column(s) field is prepopulated with existing event type resources for the selected pattern, and can be modified. To modify the selection, click the drop-down list arrow and select one or more columns from the checklist.
Name similarity and regular expressions
In contrast to exact match, name similarity and regular expressions provide the ability to identify patterns where the names of the resources in the pattern are not exactly the same.
• In the case of name similarity, the resources in the pattern must be sufficiently similar to meet the name similarity algorithm criteria. By default, name similarity is enabled.

Table 69. More information on name similarity
For more information on...           See...
How name similarity works            “Extending patterns” on page 454
A name similarity example            “Examples of name similarity” on page 457
How to configure name similarity     “Configuring name similarity” on page 297
• In the case of regular expressions, the resources in the pattern must match the defined regular expression.
Multiple resource columns
If you specify multiple resource columns, then by default these columns are combined using OR logic. You can configure whether multiple resource columns should be combined using AND or OR logic. For more information, see “Configuring multiple resource columns” on page 300.
• OR logic: correlates two events by resource as soon as the criteria are met for just one pattern resource definition.
• AND logic: correlates two events by resource only once criteria are met for all of the pattern resource definitions. Only exact match resource matching is used for AND logic. If you specify AND logic, then you cannot specify regular expression or name similarity as the mechanism for matching the resource information from the multiple selected resource columns.
Note: If you configure AND logic, and an event comes in with any one of its multiple resource columns set to NULL, then that event is automatically excluded from pattern processing.
Note: Duplicate Event Type and Resource Columns pairs are not permitted.
Regular Expression

(Optional) Click the regular expression icon to specify a regular expression to test for matches within unstructured resource information in the selected resource column. To match a string anywhere in the field, add .* before and after the characters. For example, assume you want to match the resource information in the following strings:
• "The application abc on myhost.8xyz.com encountered an unrecoverable error."
• "acme.75xyz.com down."
• "Unrecoverable error on server.111xyz.com."
In this case, use the following regular expression. Note that the test is binary: the content of the resource column either matches the regular expression, or it does not.

.*[0-9]*xyz.*

Resource names that match the regular expression are identified when the Events Pattern is created. A single group in the Event Viewer is created for all events whose resource columns contain text that matches the regular expression.
Note: A regular expression can only be specified under the following conditions:
• One column has been selected for the resource.
• Multiple columns with OR logic have been selected for the resource. OR logic is the default.
A regular expression cannot be specified when multiple columns with AND logic have been selected for the resource.

For more information about creating and editing regular expressions, see “Applying a regular expression to the pattern criteria” on page 463.
3. In the Parent Event tab of the Events Pattern portlet, select one of the following parent event options.
Most Important Event by Type
The system checks the events as they occur. The events are ranked based on the order defined in the UI. The highest ranking event is the parent. The parent event changes if a higher ranking event occurs after a lower ranking event. To prevent a dynamically changing parent event, select Synthetic Event. You can manually reorder the ranking by selecting an event and clicking the Move Up and Move Down arrows.
Synthetic Event
Create an event to act as the parent event, or select Use Selected Event as Template to use an existing event as the parent event. To create or modify a synthetic event, populate the following parameter fields, as required. All of the synthetic event fields are optional.
Node
The managed entity from which the event originated.
Summary
The event description.
Severity
The severity of the event. Select one of the following values from the Severity drop-down list: Critical, Major, Minor, Warning, Indeterminate, Clear.
Alert Group
The Alert Group to which the event belongs.
Add additional fields
Select the Add additional fields check box to add more fields to the synthetic parent event.
4. In the Test tab of the Events Pattern portlet, you can run a test to display the existing auto-discovered groups that match the pattern criteria. The test displays the types of events that are matched by the chosen criteria. To run the test, select Run Test. To cancel the test at any time, select Cancel Test.
5. To save, watch, or deploy the pattern, select one of the following options.
• Select Save to save the pattern details to the View Related Events New tab.
• Select Watch to add the pattern to the View Related Events Watched tab.
• Select Deploy to add the pattern to the View Related Events Active tab.

Results
The events pattern is created and displayed in the Group Sources table in the View Related Events portlet.
Note:
• If the pattern displays 0 groups and 0 events, the pattern creation process might not be finished. To confirm that the process is running:
1. Append the policy name to the policy logger file from the Services tab, Policy Logger service. For more information about configuring the policy logger, see https://www.ibm.com/support/knowledgecenter/SSSHYH_7.1.0.19/com.ibm.netcoolimpact.doc/user/policy_logger_service_window.html.
2. Check the following log file.

$IMPACT_HOME/logs/_policylogger_PG_ALLOCATE_PATTERNS_GROUPS.log

If the process is not running, see the Event Analytics troubleshooting information.
• After creating a new pattern, the allocation of groups to the pattern happens in the background, via a policy. If the new pattern does not have any groups allocated (this is determined by the data set), then the new pattern is deleted. For more information, see the following technote: http://www.ibm.com/support/docview.wss?uid=swg22012714.
• A pattern will not have any groups allocated under the following conditions:
– Name similarity has been switched off. By default it is on.
– No regular expressions have been associated with the pattern.
– Resource names identified in any potential groups are different.

Applying a regular expression to the pattern criteria
Using regular expressions, you can test for matches within unstructured resource information in the selected resource column.

Before you begin
The Resource field is used for grouping events in the Event Viewer. Event matching occurs in the following order:
1. Exact match
2. Regular expression
3. Name similarity
To access the View Related Events and Events Pattern portlets, users must be assigned the ncw_analytics_admin role.

About this task
The regular expression is used to match specific information from unstructured data in the selected resource column.
Note: A regular expression can only be specified under the following conditions:
• One column has been selected for the resource.
• Multiple columns with OR logic have been selected for the resource. OR logic is the default.
A regular expression cannot be specified when multiple columns with AND logic have been selected for the resource. You can configure whether multiple resource columns should be combined using AND or OR logic. For more information, see “Configuring multiple resource columns” on page 300.

Procedure
1. Start the Events Pattern portlet for a group. For more information about starting the portlet, see “Starting the Events Pattern portlet” on page 459.
2. Proceed as follows:

• To apply a new regular expression, click the regular expression icon in the Pattern Criteria tab of the Events Pattern portlet.

• To modify an existing regular expression, click the Confirm icon in the Pattern Criteria tab of the Events Pattern portlet.
The Regular Expression dialog box is displayed.
3. Insert or edit the regular expression in the Expression field. To match a string anywhere in the field, add .* before and after the characters. For example, assume you want to match the resource information in the following strings:
• "The application abc on myhost.8xyz.com encountered an unrecoverable error."
• "acme.75xyz.com down."
• "Unrecoverable error on server.111xyz.com."
In this case, use the following regular expression. Note that the test is binary: the content of the resource column either matches the regular expression, or it does not.

.*[0-9]*xyz.*

Resource names that match the regular expression are identified when the Events Pattern is created. A single group in the Event Viewer is created for all events whose resource columns contain text that matches the regular expression.
4. To change or select the event type to which the regular expression is applied, select an event type from the drop-down list in the Test Data field.
Note: A regular expression only works on multiple resource fields if the fields are combined using OR logic. If a pattern has two or more event types, and they use more than one resource field, then ensure that OR logic is configured. For more information on how to do this, see “Configuring multiple resource columns” on page 300.
5. To test the regular expression, select Test. The test results are displayed in the Result field.
Note: If there are multiple matches for the given regular expression, the matches are displayed in the Result field as a comma-separated list.
6. To save and apply the regular expression, select Save. The Regular Expression dialog box is closed. A confirm symbol is displayed beside the Resource Column.
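To see what the portlet's Test function should report for the documented example, you can check the expression against the sample strings with Python's re module (a stand-in here; the portlet's actual regular expression engine might differ in details):

import re

expression = r".*[0-9]*xyz.*"
samples = [
    "The application abc on myhost.8xyz.com encountered an unrecoverable error.",
    "acme.75xyz.com down.",
    "Unrecoverable error on server.111xyz.com.",
    "No matching resource here.",
]
for text in samples:
    # The test is binary: the text either matches the expression or it does not.
    print(bool(re.fullmatch(expression, text)), text)
# Prints True for the first three strings and False for the last one.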

Viewing related event details in the Events Pattern portlet
You can view the related event details for a selected related events group in the Events Pattern portlet, when you create a new pattern for the group.

Before you begin
To access the View Related Events and Events Pattern portlets, users must be assigned the ncw_analytics_admin role.

Procedure
To view the related event details in the Events Pattern portlet, complete the following steps.
1. Open the View Related Events portlet.
2. Select a related events group in the Groups table.
3. Right-click the related events group and select Create Pattern. The Events Pattern portlet is displayed.

Results
The related event details are displayed in the Group instances and Events tables in the Pattern Criteria tab of the Events Pattern portlet.
Note: The related event details columns in the Pattern Criteria tab of the Events Pattern portlet match the Related Event Details portlet columns.

Suggested patterns
Suggested Patterns are automatically created during a Related Events Configuration. With generalization, the association of events is encapsulated by the system as a pattern.
Any Suggested Patterns that are generated can be viewed in the Group Sources table of the View Related Events portlet. For more information, see “Viewing related events by group” on page 437.
Note: Patterns are not created when the Override global event identity option is selected in the Configure Analytics portlet.
Right-click a suggested pattern in the Group Sources table to display a list of menu items. You can select the following actions from the menu list.
Edit Pattern
For more information about this action, see “Editing an existing pattern” on page 465.
Delete Pattern
For more information about this action, see “Deleting an existing pattern” on page 465.
Watch
For more information about this action, see “Watching a correlation rule” on page 448.
Deploy
For more information about this action, see “Deploying a correlation rule” on page 449.
Archive
For more information about this action, see “Archiving related events” on page 445.
Copy
Choose this action if you want to copy a row, which you can then paste into another document.

Editing an existing pattern
You can edit an existing pattern to modify the pattern criteria.

Before you begin
To access the View Related Events and Events Pattern portlets, users must be assigned the ncw_analytics_admin role.
Note: When a related events configuration scan is rerun, existing patterns cannot be edited. Only newly discovered patterns can be edited.

Procedure
1. Start the View Related Events portlet. For more information about starting the View Related Events portlet, see “Viewing related events” on page 436.
2. Select a pattern in the Group Sources table.
3. Right-click the pattern and select Edit Pattern.
4. Modify the parameter fields in the Pattern Criteria and Parent Event tabs. For more information about modifying the parameters, see “Creating an event pattern” on page 459.
5. To save, watch, or deploy the pattern, select one of the following options.
• Select Save to save the pattern details to the View Related Events New tab.
• Select Watch to add the pattern to the View Related Events Watched tab.
• Select Deploy to add the pattern to the View Related Events Active tab.

Deleting an existing pattern
You can delete an existing pattern to remove it from the Group Sources table.

Before you begin
To access the View Related Events and Events Pattern portlets, users must be assigned the ncw_analytics_admin role.

Procedure
1. Start the View Related Events portlet. For more information about starting the View Related Events portlet, see “Viewing related events” on page 436.
2. Select the pattern you want to delete in the Group Sources table.

3. Right-click the pattern and select Delete Pattern.
4. To delete the pattern, select Yes in the confirmation dialog window.

Results
The selected pattern is deleted.

Exporting pattern generalization test results to Microsoft Excel
You can export pattern generalization test results for a specific configuration to a Microsoft Excel spreadsheet from a supported browser.

Before you begin
To access the View Related Events, Related Event Details, and Events Pattern portlets, users must be assigned the ncw_analytics_admin role.

About this task
You can start the Events Pattern portlet from the View Related Events portlet or the Related Event Details portlet. Starting the Events Pattern portlet directly from the Related Event Details portlet ensures that you do not need to return to the View Related Events portlet to start the Events Pattern portlet after you review the details of a group.

Procedure
To export pattern generalization test results for a specific configuration to a Microsoft Excel spreadsheet, complete the following steps.
1. Open the View Related Events portlet.
2. Select a specific configuration from the configuration table.
3. Enter the pattern criteria, navigate to the Test tab of the Events Pattern portlet, and select Run Test.
4. Click the Export Generalization Test Results button in the toolbar. After a short time, the Download export results link displays.
5. Click the link to download and save the Microsoft Excel file.
Note: Export is restricted to the first 100 groups of the pattern test results, to provide a sample of the results in the exported file.

Results
The Microsoft Excel file contains a spreadsheet with the following tabs:
• Groups Information: This tab contains the related events groups for the configuration that you selected.
• Groups Instances: This tab contains a list of all the related events instances for all of the related events groups for the configuration that you selected.
• Group Events: This tab contains a list of all the events that occurred in the related events groups for the configuration that you selected.
• Instance Events: This tab contains a list of all the events that occurred in all of the related events instances for all the related events groups for the configuration that you selected.
• Export Comments: This tab contains any comments relating to the export for informational purposes (for example, if the spreadsheet headers are truncated, or if the spreadsheet rows are truncated).

Chapter 8. Operations

As a network operator, you can use Netcool Operations Insight to monitor, troubleshoot, and resolve network alerts, and to manage your network topology. You can also manage runbooks and automations, and review efficiency across operations teams.

Cloud and hybrid systems
Perform the following Netcool Operations Insight tasks on cloud and hybrid systems to support your operations processes.

Resolving events
Netcool Operations Insight enables you to identify health issues across your applications, services, and network infrastructure on a single management console. It provides an event list that brings together event details, event journaling, and event filtering. It also provides the capability to perform troubleshooting, resolution, and runbook actions on events directly from the console. Some events in the console are presented in event groups, based on analytics. Using the event list, you can explore why these events were grouped together, and this can further help with event resolution.

Monitoring events
Use the all new Events page to monitor, investigate, and resolve events.

About event monitoring
Event monitoring provides selected application, service, and network event information, together with groups of events brought together by intelligent underlying analytics, to help you see the information you need to perform effective troubleshooting and resolution activities.

About events
An event is a record containing structured data that summarizes key attributes of an occurrence on a managed entity, which might be a network resource, some part of that resource, or another key element associated with your network, services, or applications. More severe events usually indicate a fault condition in the managed environment, and require human operator or automated intervention. The following table lists typical columns present in a standard event. Note that your administrator might have added custom fields to the event to meet the needs of your organization. Further information on events and event columns can be found in the event reference link at the bottom of the page.

Table 70. Typical event columns
Sev
Indicates the event severity level, which shows how the perceived capability of the associated managed entity has been affected. By default, there are six severity levels, each indicated by a different colored icon in the event list. The highest severity level is Critical and the lowest severity level is Clear: Critical, Major, Minor, Warning, Indeterminate, Clear.
Node
Identifies the managed entity from which the event originated. This could be a device or host name, service name, or other entity.
Summary
Text that describes the alarm condition associated with the event and the affected managed entity.
Alert Group
Descriptive name of the failure type indicated by the event. By default, this column serves to categorize events by type.

About event groups
An event group is a group of two or more events that Netcool Operations Insight has correlated together because the underlying analytics have determined that these events belong together. Events can be added to an event group because of one or more of the factors listed below:

Factor: Based on event history, these events tend to occur within a short time of each other.
Example: A Latency event on a server is regularly followed by a Ping response time high event on that same server. These events are grouped into a temporal subgroup.
Factor: The events occur on specific resources within a predefined section of your network topology.
Example: If there is a predefined section of the network that groups together a switch and all the nodes that depend on that switch, then any events occurring on that specific switch or the nodes connected to it are grouped together. These events are grouped into a topological subgroup.
Factor: The events occur on a user defined scope.
Example: An administrator defines a scope based on the Node column. Any events that match the scope and occur within a default time window are then automatically grouped together. An example would be where an event storm occurs on the london145.acme.com server. All of the events in that storm are grouped together because they match the scope Node=london145.acme.com and occur within the default time window. These events are grouped into a scope-based subgroup (see the sketch after this list).
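As a rough illustration of the scope-based factor, the following Python sketch groups hypothetical events that share a scope value (here, the Node column) and fall within a time window. The 15-minute gap-based window and the sample data are assumptions for this sketch; the product's actual scope and window configuration is set by the administrator.

from datetime import datetime, timedelta

# Hypothetical events; Node stands in for the administrator-defined scope.
events = [
    {"Node": "london145.acme.com", "FirstOccurrence": datetime(2020, 4, 7, 0, 1)},
    {"Node": "london145.acme.com", "FirstOccurrence": datetime(2020, 4, 7, 0, 5)},
    {"Node": "paris12.acme.com",   "FirstOccurrence": datetime(2020, 4, 7, 0, 6)},
]

def scope_groups(events, scope_column="Node", window=timedelta(minutes=15)):
    """Group events that share a scope value and fall within the time window."""
    groups = {}
    for ev in sorted(events, key=lambda e: e["FirstOccurrence"]):
        instances = groups.setdefault(ev[scope_column], [[]])
        last = instances[-1]
        if last and ev["FirstOccurrence"] - last[-1]["FirstOccurrence"] > window:
            instances.append([ev])  # window expired: start a new group instance
        else:
            last.append(ev)
    return groups

print({k: [len(g) for g in v] for k, v in scope_groups(events).items()})
# {'london145.acme.com': [2], 'paris12.acme.com': [1]}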

Icons used in the Events page
Use this information to understand the purpose of the different icons used in the Events page.

Table 71. Icons used in the Events page
System filter or view
Denotes a system filter or view. System filters and views are created and assigned by administrators only, and cannot be modified by operators. This icon always appears to the left of the event filter or view name; for example, the system view that applies analytics grouping to the event data looks like this: Example_IBM_CloudAnalytics. For more information on system filters and views, see the related links at the bottom of this topic.
Global filter or view
Denotes a global filter or view. Global filters and views are accessible to all users and can be copied to your user profile and modified there. This icon always appears to the left of the event filter or view name; for example, the default global view looks like this: Default. For more information on global filters and views, see the related links at the bottom of this topic.
Group filter
Denotes a group filter. Group filters are accessible to all members of the associated user group. This icon always appears to the left of the event filter name; for example, a filter created specifically for the Network user group might look like this: Network. For more information on group filters, see the related links at the bottom of this topic.
User filter or view
Denotes a user filter or view. User filters and views are specific to a particular user; only that user and the administrator can access this category of filter. This icon always appears to the left of the event filter or view name; for example, a view created specifically by the user Annette might look like this: Annette. For more information on user filters and views, see the related links at the bottom of this topic.
Search
Click this icon to search for text in any of the columns displayed in the table.
Edit filters
Click this icon to open a separate browser tab, where you can edit the filters that are applied to the event data. This capability is available to users with administrator privileges only. For more information on editing filters, see the links at the bottom of this page.
Edit views
Click this icon to open a separate browser tab, where you can edit the views that are applied to the event data. This capability is available to users with administrator privileges only. For more information on editing views, see the links at the bottom of this page.
Correlation information
Click this icon to display event grouping information.
Temporal group
Events with a dot in this column are part of a temporal group. Click any of the dots in this column to highlight the events that are members of the group and to display a side panel with temporal group details.
Scope-based group
Events with a dot in this column are part of a scope-based group. Click any of the dots in this column to highlight the events that are members of the group and to display a side panel with group details.
Topological group
Events with a dot in this column are part of a topological group. Click any of the dots in this column to highlight the events that are members of the group and to display a side panel with group details.
Filter
Click this icon to filter the events in the table based on one of the following: severity, enrichment, or group membership.
Seasonal enrichment
When you click Filter, one of the enrichment options is to filter based on events that have associated seasonality. This is known as seasonal enrichment and is denoted using the Seasonal icon.
Topological enrichment
When you click Filter, one of the enrichment options is to filter based on events that have associated topology resource information. This is known as topological enrichment and is denoted using the Topological icon.
Runbook enrichment
When you click Filter, one of the enrichment options is to filter based on events that have one or more associated runbooks. This is known as runbook enrichment and is denoted using the Runbook icon.
More information
Click this icon to display a Help portlet above the table, which provides more information on the Events page. For more information on the Help portlet, see the links at the bottom of this page.

Accessing events
Monitor events by accessing the Events page.

About this task
If you are an operator, you can set the Events page to be your home page. See the related links at the bottom of this page for more information. The Events page provides a completely new interface for monitoring and managing events. It incorporates new ways to access all the features from the classic Event Viewer available in earlier versions of Netcool Operations Insight and in IBM Netcool/OMNIbus Web GUI, and also includes new event grouping features. If, however, you prefer to work with the classic Event Viewer, you can switch back. See the related links at the bottom of this page for more information.

Procedure

1. Click the navigation icon at the top-left corner of the screen to go to the main navigation menu.

2. In the main navigation menu, click Events. The Events page is displayed. The event data that you see on this page depends on your datasource, filter, and view settings. For more information on how to change these settings, see the related links at the bottom of this page.
Note: In order to see event groups in the event list, you must have a view selected that has the correct relationship assigned. In technical terms, the relationship must define a parent-child relationship between the ParentIdentifier and the Identifier columns. By default, the Example_IBM_Cloud_Analytics view is provided with this relationship predefined.

Switching back to the classic Event Viewer
The classic Event Viewer is the interface that was available in earlier versions of Netcool Operations Insight and in IBM Netcool/OMNIbus Web GUI. If you prefer to work in the classic Event Viewer, then you can switch back.

About this task

Procedure
1. Is there a help portlet above the Events table? The help portlet looks like this:

New Events page, new look, more power More info Dismiss

If the answer is yes, there is a help portlet above the Events table:
Go to step 2.
If the answer is no, there is no help portlet above the Events table:
a. Locate the Help icon, at the far right of the Events table toolbar.
b. Click Help to display the help portlet above the Events table.

2. Click anywhere on the help portlet to open it.
3. In the help portlet, click Switch back.

Changing datasource, filter, and view settings
To change the events and event columns that you see in the table on the Events page, change datasource, filter, and view settings.

Procedure
1. To modify the range of event data that you can view in the table, click the Datasource drop-down list and add or remove data sources. You must have at least one data source selected to see data in the table. By default this data source setting is AGG_P, which refers to the primary aggregated ObjectServer.
2. To change the event columns displayed in the table, click the Views drop-down list, which is immediately to the right of the Datasource drop-down list, and select a different view. Changing the view can also change the sort order of the events and event grouping. For more information, see the related link at the bottom of this topic.
3. To change the event rows displayed in the table, click the Filters drop-down list, which is immediately to the right of the Views drop-down list, and select a different filter.

Changing the filter changes the event rows that are displayed in the table. For example, if you only want to see events from certain network resources and your resource data is held in the Node column, then one way to do this is to select a filter that excludes events where the Node column contains resource values that you do not want to view.
Note: In general, changing the filter changes the data that is retrieved from the server; consequently, a filter that matches fewer events will give better performance.
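For example, a filter along the lines of Node = 'london145.acme.com' AND Severity >= 4 (illustrative values only, reusing the example server name from earlier in this chapter) retrieves only the major and critical events for a single server. This is typically a far smaller result set than an unfiltered view, so the table loads and refreshes faster.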

What to do next
To manage the events that you and your operations team can see in the Events page, you can, if you have the correct system permissions, edit filters and views. For more information, see the related link at the bottom of this topic.

Displaying the event side panel
Click any event to display a side panel containing more information about that event.

About this task
At a minimum, the side panel contains the following sections:
Actions
Actions that can be performed on the selected event. In the classic Web GUI Event Viewer, these actions were available as right-click tools. For more information, see “Troubleshooting events” on page 477.
Information
Full data associated with the event, and a timeline containing journal information and user comments. For more information, see “Displaying event details” on page 472.
If the event has been enriched with seasonality, topology, or runbook information, then the side panel also includes one or more of the following sections:
Seasonality
One or more seasonal time windows for the selected event, where a seasonal time window is an hour, day of the week, day of the month, or some other time period when the event tends to occur. For more information, see “Displaying event seasonality” on page 474.
Topology
An indication that the event can be located on a specific resource in the network topology system. For more information, see “Displaying event topology” on page 475.
Runbook
An indication that a runbook is associated with the event. For more information, see “Executing a runbook” on page 478.
If the event is part of an event group, then, depending on the type of analysis used to generate the group, the side panel also includes one or more of the following sections. For more information, see “Displaying analytics details for an event group” on page 480.
Temporal correlation
Details of a temporal group in which this event is involved.
Scope-based correlation
Details of a scope-based group in which this event is involved.
Topology correlation
Details of a topology group in which this event is involved.

Displaying event details
The Events page displays the most important columns associated with an event. Click any event to display a full set of columns for that event in the sidebar.

Procedure
1. Click an event of interest in the table on the Events page. A side panel containing multiple information sections opens on the right-hand side of the table. The top section is called the Actions section and displays a set of actions that can be performed on the selected event.
2. Close the Actions section by clicking the upward-pointing chevron at the right of the Actions section header.
3. Open the Information section by clicking the downward-pointing chevron at the right of the Information section header. Information for the selected event is shown in the following tabs:
Fields
This tab displays the complete set of column data for the selected event, including familiar fields, such as Summary, Node, Severity, and LastOccurrence. Other less familiar fields are also displayed in this tab. For a description of each of these fields, see the related link at the bottom of this topic.
Details
This tab displays extra data associated with the selected event. If there is no data to display, then a No Data message is shown in the tab area. For a description of this extra data, see the related link at the bottom of this topic.
Timeline
This tab displays the timeline for the selected event. The timeline includes journal entries for the event, and comments added by operators, in chronological order.

Working with the event timeline
Click any event to display a timeline for that event in the sidebar. The timeline includes journal entries for the event, and comments added by operators, in chronological order.

Procedure
1. Click an event of interest in the table on the Events page. A side panel containing multiple information sections opens on the right-hand side of the table. The top section is called the Actions section and displays a set of actions that can be performed on the selected event.
2. Close the Actions section by clicking the upward-pointing chevron at the right of the Actions section header.
3. Open the Information section by clicking the downward-pointing chevron at the right of the Information section header. Information for the selected event is shown in the following tabs:
Fields
This tab displays the complete set of column data for the selected event, including familiar fields, such as Summary, Node, Severity, and LastOccurrence. Other less familiar fields are also displayed in this tab. For a description of each of these fields, see the related link at the bottom of this topic.
Details
This tab displays extra data associated with the selected event. If there is no data to display, then a No Data message is shown in the tab area. For a description of this extra data, see the related link at the bottom of this topic.
Timeline
This tab displays the timeline for the selected event. The timeline includes journal entries for the event, and comments added by operators, in chronological order.
4. Click Timeline. The timeline presents a vertical display of journal entries for, and comments on, the selected event, in chronological order.
5. Optional: Type a comment and click Add comment at any time to add a comment on this event. Your comment is stored in the timeline in chronological order with the other entries.

Displaying event seasonality
Events can occur within a seasonal time window. For example, an event might tend to occur on a certain day of the week or a certain time of the day. Events that tend to occur within a seasonal time window are highlighted on the Events page using a large dot in the event's Seasonal column. Click the dot to find out when the event tends to occur.

About this task
Examples of seasonal time windows include the following:
Hour of the day: Between 12:00 and 1:00 pm
Day of the week: On Mondays
Day of the month: On the 3rd of the month
Day of the week at a given hour: On Mondays, between 12:00 and 1:00 pm
Day of the month at a given hour: On the 3rd of the month, between 12:00 and 1:00 pm

Procedure
1. Identify events that are seasonal.
• Flat events, that is, events that are not part of an event group, that have associated seasonality have a large dot immediately visible in their Seasonal column.
• Events that are part of an event group that have associated seasonality also have a large dot in their Seasonal column, but you must first open the event group in order to see the event. Open an event group by clicking the Down chevron icon on the left of the parent event group row.
2. Click the large dot in the Seasonal column for the event of interest. A side panel containing multiple information sections opens on the right-hand side of the table. The Seasonality section is open and displays one or more seasonal time windows for the selected event.
3. Click the Down chevron next to each seasonal time window to see a visual representation of the time window. For example, if the underlying algorithm has identified that the selected event tends to occur between 12 noon and 1:00 pm, then this event will have the following seasonal time window, displayed using a clock-based representation:

Between 12:00 and 1:00 pm

4. If you have sufficient user permissions, then you can investigate the event seasonality further.
a) Click the More information link at the bottom of the side panel's Seasonality section. This action displays the Seasonality Details page, which provides a calendar view of all of the historical events that contributed to the selected seasonal event. This page contains the following sections:
Seasonal time windows
One or more seasonal time windows are listed in the top left pane, together with a colored number square indicating how many historical events occurred within this time window during the historical period analyzed. The color of the square is the color code for the respective seasonal time window. Select a seasonal time window to filter the Calendar and the Historical event table to show just the historical events that contributed to that time window.
Calendar
A calendar of the historical period analyzed (the last three months or more) showing days on which the historical events of interest occurred, colored with the color code for the respective time window.
Historical event table
Table listing the historical events during the historical period analyzed.
b) Click the Events breadcrumb at the top left of the screen to return to the Events page.

Displaying event topology
If the resource on which an event occurred can be located in the network topology system, then a large dot is presented in the event's Topology column. Click the large dot to display a topology map for this event, centered on the resource on which the event occurred.

Procedure
1. Identify events that have an associated resource in the network topology.
• Flat events, that is, events that are not part of an event group, that have an associated resource in the network topology have a large dot immediately visible in their Topology column.
• Events that are part of an event group that have an associated resource in the network topology also have a large dot in their Topology column, but you must first open the event group in order to see the event. Open an event group by clicking the Down chevron icon on the left of the parent event group row.
2. Click the large dot in the Topology column for the event of interest. A side panel containing multiple information sections opens on the right-hand side of the table. The Topology section is open and provides the following information:
Resource
Name of the resource on which the event of interest occurred.
Begin time
Date and time when the topology for this resource was last updated.
Observed time
Date and time when the event of interest was observed on this resource.
Topology pane
Contains a topology map for this event, centered on the resource on which the event occurred.
3. If you have sufficient user permissions, then you can investigate the topology further.
a) Click the More information link at the bottom of the side panel's Topology section. This action launches the topology management service in a separate browser tab, which displays the same topology map, with the full topology display experience. For more information on how to use the topology management service to explore topology, see the related link at the bottom of this topic.
b) Click the Events breadcrumb at the top left of the screen to return to the Events page.

Searching events
You can search the list of events based on data contained within the event columns. For example, type london in the search box to filter the list to display only those events where one or more of the event columns contain the string "london".

Procedure

1. In the toolbar above the table, click Search.

An editable search bar is displayed to the left of the Search icon.
2. Click inside the search bar and type your search term. The table now displays only those event rows that meet the criteria specified. For example, assume you want to find events where any of the displayed columns contains the string "london". To do this, type "london" in the search bar. Based on this search, the table will display events such as the following:
• Events where the Node column contains resources named server-london-123.
• Events where the Summary column contains text such as:

Memory leak on resource server-london-123

What to do next
To remove the search criteria and to display all events again in the table, proceed as follows:

In the search bar, remove the search term; for example, by clicking Close. This removes the filter or search criteria and updates the table to display all events.

Filtering events
You can filter the list of events based on severity, enrichment, and grouping.

About this task
Filter the events based on one or more of the following factors:
• Severity: the severity of the event.
• Enrichment: has the event been enriched with seasonality, topology, or runbook data?
• Grouping: is the event within a temporal, topological, or scope-based group?

Procedure

1. In the toolbar above the table, click Filter.
2. Apply any of the following filters:
Severity
Click one or more severity levels to filter on.
Enrichment
Click one or more of the following:
• Seasonal
• Topological
• Runbook
Grouping
Click one or more of the following:
• Scope-based
• Temporal
• Topological
3. Click Apply Filters. The table now displays only those event rows that meet the criteria specified.

What to do next To remove the filter criteria and to display all events again in the table, proceed as follows:

1. In the toolbar above the table, click Filter . 2. Click Reset Filters. This removes the filter or search criteria and updates the table to display all events.

3. Click the Filter icon to close the filter dialog box.

Troubleshooting events
You can troubleshoot events by running predefined actions on an event, including administrative actions, such as acknowledging an event and creating a ticket based on an event, and information retrieval actions, such as running ping or traceroute commands against the resource on which the event occurred.

Procedure
1. Click an event of interest in the table on the Events page. A side panel containing multiple information sections opens on the right-hand side of the table. The top section is called the Actions section and displays a set of actions that can be performed on the selected event. If you want to perform actions on multiple events in one go, select multiple events using Shift-click.
2. In the Actions section, select the action to perform on the event. The actions available are as follows. For more information on each of these troubleshooting actions, see the links at the bottom of the topic.
Acknowledge
Acknowledge an event when you want to begin to work on that event. You must be the event owner in order to perform this action.
De-acknowledge
De-acknowledge an event if you are no longer working on it. You must be the event owner in order to perform this action.
Prioritize
Use this command to change the severity of an event. You must be the event owner in order to perform this action.
Suppress/Escalate
Suppress an event to remove it from all operator event lists. Escalate an event to promote it to the Escalated event list filter, where it can get attention from a wider range of support people. You must be the event owner in order to perform these actions.
Take ownership
Take ownership of an event if you want to work on resolving that event. Once you have ownership of an event, you can perform other actions on it, such as Acknowledge, Prioritize, Suppress, Escalate, and Delete.
User Assign
Use this command to assign an event to another user. That user then becomes the event owner.

Group Assign
Use this command to assign an event to a group.
Delete
Delete an event to remove it from the events list. You must be the event owner in order to perform this action.
Ping
Use this command to run the ping command against the network resource specified in the Node field of the event.
Event Search
Use this command to perform a historical event search against the selected event.
Create ticket
Use this command to create a ticket against the selected event.

Executing a runbook
A runbook is a set of predefined actions that are meant to resolve a fault condition associated with an event. For example, a Memory utilization 100% event might have an associated runbook that automatically restarts the resource associated with that event. If a runbook has been set up for a specific event, you will see a large dot in the event's Runbook column. Click the dot to open the event sidebar, from where you can launch the runbook.

About this task
A runbook can be fully automatic, which means that clicking a button runs all of the predefined actions in order. Alternatively, you might be required to manually run some or all of the steps of the runbook.

Procedure
1. Identify events that have an associated runbook.
• Flat events, that is, events that are not part of an event group, that have an associated runbook have a large dot immediately visible in their Runbook column.
• Events that are part of an event group that have an associated runbook also have a large dot in their Runbook column, but you must first open the event group in order to see the event. Open an event group by clicking the Down chevron icon on the left of the parent event group row.
2. Click the large dot in the Runbook column for the event of interest. A side panel containing multiple information sections opens on the right-hand side of the table. The Runbook section is open and provides the following information:
Name
Name of the runbook.
Description
Description of the runbook.
Type
Type of runbook; options are:
• Manual: indicates that the runbook is fully manual or semi-automated, requiring you to perform at least some of the steps manually.
• Automated: a runbook containing automations only. When you execute the runbook, you must select the automation and then click Run.
Rating
Average rating out of five provided for this runbook following execution.
Success rate
Percentage of successful executions of the runbook, based on operator feedback following runbook execution.

3. Click Execute runbook. This action displays the Runbook page, where you can check the runbook parameters and then either launch the entire runbook, if this is an automated runbook, or execute the runbook step by step, if this is a manual runbook. For more information on how to run a runbook, see steps 4 to 11 in the related link at the bottom of this topic.

Results
Once the runbook has been executed, you are automatically redirected back to the Events page.

Displaying an event group
Expand an event group to display the events that have been correlated together within the group.

About this task
An event group contains two or more events correlated together by the underlying analytics. The group can include Temporal groups, Topological groups, and Scope-based groups.

Procedure
1. Identify an event group within the table. You can identify a group using the following signs:
• It has a Down chevron icon at the far left of its row in the table, immediately left of the severity icon.
• By default, it has a summary that includes the words: GROUP (n active events), where n is the number of events in the group.
2. Open the event group by clicking the Down chevron icon at the far left of its row in the table. The group's events are now displayed under the parent event.

Displaying probable cause for an event group
When you expand an event group, you will see the probable cause ratings for each event in the group. The event with the greatest probable cause rating is the most likely cause of these events.

Procedure
1. Open an event group as described in the related link at the bottom of this topic. Probable cause information is displayed in the Probable cause column on the left of the table. Each of the events in the group has a bar in the Probable cause column indicating the percentage probability that this is the probable cause event. The probable cause event has the longest bar, and its bar is bright blue. The other events have shorter bars, shown in a dull blue color.
2. Hover over the bar of the probable cause event to see the percentage probability that this is the root cause event.
3. Click the bar of the probable cause event. A side panel containing multiple information sections opens on the right-hand side of the table. The open section is called the Probable cause section and displays text stating that this event is the probable cause. It also provides the following buttons:
Reference topology
Click here to show the topology that was used to calculate the probable cause values.
Add a comment
Click here to comment on whether you think these probable cause scores are correct.

Displaying analytics details for an event group
To understand why these events were grouped together, click the option to show group details. This option displays the underlying temporal, topological, and scope-based groups that were brought together to form this event group.

Procedure
1. Open an event group as described in the related link at the bottom of this topic.

2. Click Correlation information. The Grouping side panel opens on the right-hand side of the table. This panel shows why the events in the group are related, by showing the different subgroups that make up the event group. The panel contains three columns, as follows:

Temporal group column
Based on event history, the events in this column that are marked with a large dot tend to occur within a short time of each other.
Note: Dots in this column that are marked with the same letter correspond to events that are part of the same historical temporal group. Dots that are not marked with a letter correspond to events that were brought into the event group by the temporal pattern analytics algorithm based on common patterns of behavior, and not based on historical occurrences.

Scope-based group column
The events in this column that are marked with a large dot occur within a configurable time window on an administrator-defined scope, such as a location, service, or resource.

Topological group column
The events in this column that are marked with a large dot occur on resources within a predefined section of your network topology.
These sub-groups are joined together to form an event group if the same event occurs in two or more sub-groups. In this way, multiple sub-groups can be joined together.
3. Click a dot to see more details on any of these sub-groups. Click a link for information on one of these columns:

• Temporal group column

• Scope-based group column

• Topological group column

Temporal group column
Clicking a dot in this column opens the sidebar, with the Temporal correlation section open. This section contains the following information, to help you assess the validity of the group.
Group details or Pattern details
The title of this tab might be either Group details or Pattern details.
• Group details: the tab has this title if the group is purely based on historical co-occurrence of events.
• Pattern details: the tab has this title if the temporal pattern analytics algorithm has identified patterns of behavior among temporal groups that are similar, but occur on different resources.
For more information on temporal groups and temporal patterns, see the related link at the bottom of this topic.
Group details
This tab displays details about the selected temporal group.

First group instance
Date and time of the first instance of this group.
Total group instances
Total number of historical instances of this group. For details of when these instances occurred and how many events occurred in each instance, see the Group instance heatmap.
Average instance duration
Average time in seconds that this group instance lasted.
Group instance heatmap
Time-based heatmap showing the recent historical period in months, with a grey square for each day. Each darker square indicates a day on which there was at least one group instance. Hover over the square to see details of this group instance.
Pattern details
This tab displays details about the temporal pattern associated with the selected group. This tab only appears if the temporal pattern analytics algorithm has identified patterns of behavior among temporal groups that are similar, but occur on different resources.
Total pattern instances
Total number of instances of this pattern across two or more temporal groups. For details of when these instances occurred, see the Pattern instance heatmap.
Average instance duration
Average time in seconds that this pattern instance lasted.
Matched resource attributes
Resources on which this pattern has been identified.
Pattern instance heatmap
Time-based heatmap showing:
• In a grey square or triangle, the day(s) when the temporal pattern occurred on the resource associated with the events currently selected in the events table.
• In a blue square or triangle, the day(s) when the temporal pattern occurred on other resources.
Policy details
If you have sufficient permissions, the Policy details tab is displayed. In this tab you can configure the analytics policy that generated this group using the following controls. Any changes that you make here are visible to the administrator in the Policies GUI. For more information about the Policies GUI, see the related link at the bottom of this topic.
Status
By default the policy is enabled, which means that the policy will continue to group together incoming events. Click the toggle to disable the policy. Disabled policies don't act on incoming events. However, as opposed to rejected policies, disabled policies remain in your administrator's main policy table and can be enabled at the click of a switch.
Lock policy?
Locked policies continue to act on incoming events. However, the analytics algorithm cannot update a locked policy.

CAUTION: Once a policy has been locked, it cannot be unlocked, even by an administrator. The unlock action on a policy will mark it as unlocked in this GUI and in the Policies GUI, but the policy continues to be locked.
Comment
Add a comment on this policy. Your administrator will be able to see the comment in the Policies GUI.
If you have sufficient permissions, you also see the following options.

Reject policy
If you don't believe that the events in this temporal group or pattern belong together, you can reject the associated analytics policy. Rejected policies don't act on incoming events.
More information
Click this link to display the Temporal Details panel, where you can access more details on the historical instances of this group. For more details, see Temporal Details panel.

Scope-based group column
Clicking a dot in this column opens the sidebar, with the Scope-based correlation section open. This section contains the following information:
Scope
Displays the value of the scope parameter ScopeID used to group these events together. This is typically a location, service, or resource value.
Number of events in group
Number of events in the scope-based group; that is, the number of events that have occurred within a defined time window on the location, service, or resource value in the Scope field above.
Group duration
Duration of the scope-based group.
Group timeline
Shows the start and end times of the scope-based group. Events are marked with a short vertical line along the timeline. Move the blue ball back and forward along the timeline to display elapsed events in the Event table below.
Event table
Lists the events that make up this scope-based event group. The content of the table updates to show elapsed events as you move the blue ball back and forward along the Group timeline.

Topological group column
Clicking a dot in this column opens the sidebar, with the Topology correlation section open. This section contains the following information:
Topology group name
Name of the topology defined in the topology management service, on which this topology group is based. For more information on how topological groups are defined based on defined topologies, see the related link at the bottom of this topic.
Topology
Pane showing the resources in the topology on which this topology group is based. You can perform the following actions on the topology.

Table 72. Actions on the topology
Resource
Hover over: Highlights the event(s) on that resource in the events table to the left.
Click: Displays the relationships between that resource and neighboring resources. The relationships are displayed in text on the lines connecting the resources. Examples of relationships include: runsOn, members, exposes.
Right click: Displays the following options:
Resource details
Lists property values for this resource.
Comments
Provide a comment on this resource here.
Connection (lines connecting the resources)
Right click: Displays the following option:
Relationship details
Lists property values for this relationship.

What to do next
The Temporal Details panel displays more details on the historical instances of a temporal group. The following information is displayed:
Toolbar

Search
Searches event data in all event group instances shown on this page.

Views
Changes the event columns shown in the Overview timeline, the Event group instance timeline, and the Event group instance details sections of this page.

Filter
Filters the events shown by severity and other column values.
Overview timeline
Displays event group instances over time and controls the display of event group instance data on the rest of the page. By default, the time range sliders are open sufficiently to show data on all event group instances. Modify the time range either by clicking and dragging over the desired range inside the timeline, or by dragging the left and right sliders to the desired range. The rest of the screen updates accordingly.
Event group instance timeline
Displays all of the events that have historically participated in instances of this temporal event group. To the right of the table is an instance map, providing a graphical view over time of when the various instances have occurred.
Event group instance details
Displays the following information for each event group instance:
Start date and time of event group instance
Indicates the first occurrence value of the first event in the event group instance.
Distribution of event severity values
Pie chart providing a visual indication of the event severity values. Hover over the pie chart for more details.

Sparkline
Chart of event occurrence over time.
Duration of event group instance
Duration of the event group instance, in text.
Down chevron icon
Click the Down chevron icon to see an event table showing column details for each event in this group instance.

Monitoring events using the classic Event Viewer
If you prefer to work with the classic Event Viewer, you can do so directly from the Cloud GUI main navigation menu.

Procedure

1. Click the navigation icon at the top-left corner of the screen to go to the main navigation menu.

2. In the main navigation menu, click Netcool Web GUI.

The Netcool Web GUI Event Viewer is displayed in a separate tab. The event data that you see on this page depends on your datasource, filter, and view settings. For more information on how to change these settings, see the related links at the bottom of this page.
Note: In order to see event groups in the event list, you must have a view selected that has the correct relationship assigned. In technical terms, the relationship must define a parent-child relationship between the ParentIdentifier and the Identifier columns. By default, the Example_IBM_Cloud_Analytics view is provided with this relationship predefined.

What to do next
For more information on the classic Event Viewer, see the related link at the bottom of the topic.

Displaying an event group
Expand an event group to display the events that have been correlated together within the group.

About this task
An event group contains two or more events correlated together by the underlying analytics. The group can include Temporal groups, Topological groups, and Scope-based groups.

Procedure
1. Identify an event group within the table. You can identify a group using the following signs:
• It has a Down chevron icon at the far left of its row in the table, immediately left of the severity icon.

• It has an Investigate link in the Grouping column.
• By default, it has a summary that includes the words: GROUP (n active events), where n is the number of events in the group.

2. Click Investigate to open the event group. A second DASH tab opens containing an events table. The group's events are now displayed under the parent event in that table.

What to do next
Display analytics details for the event group as described in the related link at the bottom of the topic.

Working with topology
Use the Topology management capability within Netcool Operations Insight to visualize your topology data. First you define a seed resource on which to base your view, then choose the levels of networked resources around the seed that you wish to display, before rendering the view. You can then further expand or analyze the displayed topology in real time, or compare it to previous versions within a historical time window.
The Topology Viewer is the component that visualizes topology data. It has four toolbars and a visualization display area.
Navigation toolbar
You use the navigation toolbar to select the seed resource, define the number of relationship hops to visualize from the seed resource, and specify the type of relationship hop traversal to make (either host-to-host, or element-to-element).
Resource filter toolbar
You use the resource filter toolbar to apply entity- or relationship-type filters to the resources displayed in the topology.
Visualization toolbar
You use the Visualization toolbar to customize the topology view, for example by zooming in and panning.
History toolbar
You use the History toolbar to compare and contrast a current topology with historical versions.
Topology visualization panel
You use the Topology visualization panel to view the topology, and access the resource nodes for further analysis via the context menu.

Rendering (visualizing) a topology
You define the scope of the topology that you want to render by specifying a seed resource, the number of relationship hops surrounding that resource, as well as the types of hops. The topology service then supplies the data required to visualize the topology.

Before you begin
To visualize a topology, your topology service must be running, and your Observer jobs must be active.

About this task
You use this task to render a topology based on a specified seed resource.
Note: The UI has a default timeout set at 30 seconds. If service requests are not received in that time, a timeout message is shown, as in the following example:
A time-out has occurred. No response was received from the Proxy Service within 30 seconds.
See Topology render timeout for more information on addressing this issue.

Procedure
1. Access the Topology Viewer. The Search page is displayed immediately. From here, you search for a seed resource to build your topology.
2. Find a resource.
Search for a resource
The seed resource of the topology visualization. You define the seed resource around which a topology view is rendered using the Search for a resource field. As you type in a search term related to the resource that you wish to find, such as name or server, a drop-down list is displayed with suggested search terms that exist in the topology service.

If the resource that you wish to find is unique and you are confident that it is the first result in the list of search results, then instead of selecting a result from the suggested search terms, you can click the shortcut in the Suggest drop-down, which renders and displays the topology for the closest matching resource.
If you select one of the suggested results, the Search Results page is displayed listing possible resource results. The results are listed under separate Resources and Topologies tabs.
Defined topology restriction:
• Defined topologies must be defined by an administrator user before they are listed.
• If you are an administrator defining topology templates in the Topology template builder, search results are listed under separate Resources and Templates tabs.
• To add a defined topology search result to the collection of topologies accessible in the Topology Dashboard, tag it as a favorite by selecting the star icon next to it.
For each result, the name, type, and other properties stored in the Elasticsearch engine are displayed. If a status other than clear exists for a search result, the maximum severity is displayed in the information returned, and a color-coded information bar above each result displays all non-clear statuses (in proportion). You can expand a result in order to query the resource or defined topology further and display more detailed, time-stamped information, such as its state and any associated severity levels, or when the resource was previously updated or replaced (or deleted). You can click the View Topology button next to a result to render the topology.
Defined topology restriction:
• When you load a predefined topology, it is displayed in a 'defined topology' version of the Topology Viewer, which has restricted functionality. You are unable to follow its neighbors, change its hops, or make use of its advanced filters.
• You can recenter the defined topology from the context menu, which loads it in the Topology Viewer with all its standard functionality.
From the Navigation toolbar, perform the following actions:
3. Select a number between one and four to define the number of relationship hops to be visualized. See the Defining global settings topic for more information on customizing the maximum hop count.
4. Choose one of the following hop types:
• The Element to Element hop type performs the traversal using all element types in the graph.
• The Host to Host hop type uses an aggregate traversal across elements with the entity type 'host'.
• The Element to Host hop type provides an aggregated hop view like the Host to Host type, but also includes the elements that are used to connect the hosts.
5. Filter the topology before rendering it. Open the Filter toolbar using the Filter toggle, and apply the filters required. For more information on using filters, see the Filter the topology section in the 'Viewing a topology' topic.
6. Click Render to render the topology.

Results
The Topology Viewer connects to the topology service and renders the topology. By default, the view is refreshed every thirty seconds, unless specified otherwise (by an administrator user).
Trouble: Topology render timeout
If you receive a timeout message, this may be due to a number of reasons:
• Large amounts of data being retrieved for complex topologies

• Too many hop counts specified
• Issues with the back-end services
Workarounds
• Check that all services are running smoothly. You can verify that the docker containers are running using the following command:

$ASM_HOME/bin/docker-compose ps
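On a healthy system, the command returns output similar to the following (an illustrative sketch only; the container names, commands, and ports shown here are hypothetical and vary by release):

Name              Command            State  Ports
--------------------------------------------------------------
asm_topology_1    /opt/ibm/start.sh  Up     0.0.0.0:8080->8080/tcp
asm_search_1      /opt/ibm/start.sh  Up     9200/tcp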

The system should return text indicating that all containers have a state of Up.
• Lower the hop count to reduce the service load. See the Defining global settings topic for more information on customizing the maximum hop count.
• An administrator user can increase the default 30-second timeout limit by changing the following setting in the application.yml file:

proxyServiceTimeout: 30

You must restart DASH for the new timeout value to take effect:
– To stop the DASH server, run /bin/stopServer.sh server1
– Once stopped, start the DASH server: /bin/startServer.sh server1
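For example, to allow the Proxy Service up to 60 seconds to respond, an administrator would edit the property as follows and then restart DASH (a minimal sketch; only the proxyServiceTimeout property is shown, and the rest of the application.yml file is left unchanged):

# application.yml: raise the Proxy Service timeout from the default 30 seconds
proxyServiceTimeout: 60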

What to do next
You can now refine and manipulate the view for further analysis.

Viewing a topology
Once you have rendered a topology, you can refine and manipulate the view.

Before you begin
To refine a topology, you must have previously rendered a topology, as described in the “Rendering (visualizing) a topology” on page 485 topic.
Note: You can change a topology if and as required while viewing or refining an existing topology.

About this task
You can perform the following actions once you have rendered a topology:
View the topology
You can zoom in and out of specific areas of the topology, and pan across it in various ways. You can also auto-fit the topology into the available display window, draw a mini map, or redraw the entire topology.
Use the Update Manager
With auto-updates turned off, you can work with your current topology until you are ready to integrate the new resources into the view.
Filter resources
You can filter the types of resources displayed, or the types of relationships rendered.

Procedure
View a topology (created earlier)
1. From the Visualization toolbar below the Navigation toolbar, you can manipulate the topology using a number of visualization tools.
Select tool submenu
When you hover over the Select tool icon, a submenu is displayed from which you can choose the Select, Pan, or Zoom Select tool.

Select tool
Use this icon to select individual resources using a mouse click, or to select groups of resources by creating a selection area (using click-and-drag).
Pan tool
Use this icon to pan across the topology using click-and-drag on a blank area of the visualization panel.
Zoom Select tool
Use this icon to zoom in on an area of the topology using click-and-drag.
Zoom In
Use this icon to zoom in on the displayed topology.
Zoom Out
Use this icon to zoom out of the displayed topology.
Zoom Fit
Use this icon to fit the entire topology in the current view panel.
Overview Toggle
Use this icon to create the overview mini map in the bottom right corner. The mini map provides an overview of the entire topology while you zoom in or out of the main topology. The mini map displays a red rectangle to represent the current topology view.
Layout
Use this icon to recalculate, and then render the topology layout again. You can choose from a number of layout types and orientations.
Layout 1
A layout that simply displays all resources in a topology without applying a specific layout structure.
Layout 2
A circular layout that is useful when you want to arrange a number of entities by type in a circular pattern.
Layout 3
A grouped layout that is useful when you have many linked entities, as it helps you visualize the entities to which a number of other entities are linked. This layout helps to identify groups of interconnected entities and the relationships between them.
Layout 4
A hierarchical layout that is useful for topologies that contain hierarchical structures, as it shows how key vertices relate to others, with peers in the topology being aligned.
Layout 5
A peacock layout that is useful when you have many interlinked vertices, which group the other linked vertices.
Layout 6
A planar rank layout that is useful when you want to view how the topology relates to a given vertex in terms of its rank, and also how vertices are layered relative to one another.
Layout 7
A rank layout that is useful when you want to see how a selected vertex and the vertices immediately related to it rank relative to the remainder of the topology (up to the specified number of hops). The root selection is automatic: for example, vertices with high degrees of connectivity outrank those with lower degrees of connectivity. This layout ranks the topology automatically around the specified seed vertex.
Layout 8
A root rank layout similar to layout 7, except that it treats the selected vertex as the root. This layout is useful when you want to treat a selected vertex as the root of the tree, with others being ranked below it. Ranks the topology using the selected vertex as the root (root selection: Selection).

Layout orientation
For layouts 4, 6, 7, and 8, you can set the following layout orientations:
• Top to bottom
• Bottom to top
• Left to right
• Right to left
History toggle
Use this to open and close the Topology History toolbar. The topology is displayed in history mode by default.
Configure Refresh Rate
When you hover over the Refresh Rate icon, a submenu is displayed from which you can configure the auto-update refresh rate. You can pause the topology data refresh, or specify one of the following values: 10 seconds, 30 seconds (default), one minute, or five minutes.
Resource display conventions
Deleted: A minus icon shows that a resource has been deleted since last rendered. Displayed when a topology is updated, and in the history views.
Added: A purple plus (+) icon shows that a resource has been added since last rendered. Displayed when a topology is updated, and in the history views.
Added (neighbors): A blue asterisk icon shows that a resource has been added using the 'get neighbors' function.
Use the Update Manager
2. If auto-updates have been turned off, the Update Manager informs you if new resources have been detected. It allows you to continue working with your current topology until you are ready to integrate the new resources into the view. The Update Manager is displayed in the bottom right of the screen.
Show details
Displays additional resource information.
Render
Integrates the new resources into the topology. Choosing this option recalculates the topology layout based on your current display settings, and may therefore adjust the displayed topology significantly.
Cogwheel icon
When clicked, provides you with quick access to change your user preferences:
• Enable auto-refresh: Switches auto-refresh back on, and disables the Update Manager.
• Remove deleted resources: Removes the deleted resources from your topology view when the next topology update occurs.
Hide
Reduces the Update Manager to a small purple icon that does not obstruct your current topology view. When you are ready to deal with the new resources, click the icon to display the Update Manager again.
Modify a topology
3. The displayed topology consists of resource nodes and the relationship links connecting the resources. You can interact with these nodes and links using the mouse.
Dragging a node
Click and drag a node to move it.

Selecting a node
Selection of a node highlights the node, and emphasizes its first-order connections by fading all other resources.
Context menu (right-click)
You open the context menu using the right-click function. The context menu provides access to the resource-specific actions you can perform. For resource entities, you can perform the following:
Resource Details
When selected, displays a dialog that shows all the current stored properties for the specified resource in tabular and raw format. When selected while viewing a topology history with Delta mode On, the properties of the resource at both the reference time and at the delta time are displayed.
Resource Status
If statuses related to a specific resource are available, the resource is marked with an icon depicting the status severity level, and the Resource Status option appears in the resource context menu. When selected, Resource Status displays a dialog that shows the time-stamped statuses related to the specified resource in table format. The Severity and Time columns can be sorted, and the moment that Resource Status was selected is also time-stamped.
In addition, if any status tools have been defined, the status tool selector (three dots) is displayed next to the resource's statuses. Click the status tool selector to display a list of any status tools that have been defined, and then click the specific tool to run it. Status tools are only displayed for the states that were specified when the tools were defined.
The severity of a status ranges from 'clear' (white tick on a green square) to 'critical' (white cross on a red circle).

Table 73. Severity levels
Each severity level is represented by its own icon. From lowest to highest, the levels are:
• clear
• indeterminate
• information
• warning
• minor
• major
• critical

Comments
When selected, this displays any comments recorded against the resource. By default, resource comments are displayed by date in ascending order. You can sort them in the following ways:
• Oldest first
• Newest first
• User Id (A to Z)
• User Id (Z to A)

Users with the inasm_operator role can view comments, but cannot add any comments. Users with inasm_editor or inasm_admin roles can also add new comments. See https://www-03preprod.ibm.com/support/knowledgecenter/SS9LQB_1.1.8/Installing/t_asm_configuring.html for more information on assigning user roles.
To add a new comment, enter text into the New Comment field, and then click Add Comment to save.
Get Neighbors
When selected, opens a menu that displays the resource types of all the neighboring resources. Each resource type lists the number of resources of that type, as well as the maximum severity associated with each type. You can choose to get all neighbors of the selected resource, or only the neighbors of a specific type. This lets you expand the topology in controlled, incremental steps. Selecting Get Neighbors overrides any existing filters. You can Undo the last neighbor request made.
Follow Relationship
When selected, opens a menu that displays all adjacent relationship types. Each relationship type lists the number of relationships of that type, as well as the maximum severity associated with each type. You can choose to follow all relationships, or only the relationships of a specific type.
Show last change in timeline
When selected, displays the history timeline depicting the most recent change made to the resource.
Show first change in timeline
When selected, displays the history timeline depicting the first change made to the resource.
Recenter View
When selected, updates the displayed topology with the specified resource as seed.
Filter the topology
4. Open the Resource Filter toolbar using the Filter toggle in the Topology Visualization toolbar. From here, you can apply filters to the topology in order to refine the types of resources or relationships displayed. The Filter toolbar is displayed as a panel on the right-hand side of the page, and consists of a Simple and an Advanced tab. Each tab provides access to lists of Resource types and Relationship types. Only types relevant to your topology are displayed, for example host, ipaddress, or operatingsystem, although you can use the Show all types toggle to view all of them.
Simple tab
When you use the Simple tab to filter out resource or relationship types, all specified types are removed from view, including the seed resource. It only removes the resources matching that type, leaving the resources below, or further out from, that type, based on topology traversals. By default, all types are On. Use the Off toggle to remove specific types from your view.
Advanced tab
The Advanced tab performs a server-side, topology-based filter action. It removes the resources matching that type, as well as all resources below that type. However, the seed resource is not removed from view, even if it is of a type selected for removal.
Tips
Reset or invert all filters: Click Reset to switch all types back on, or click Invert to invert your selection of types filtered.
Hover to highlight: When a topology is displayed, hover over one of the filtering type options to highlight the matching resources in the topology.

Viewing topology history
You can view a topology dynamically, or use the history timeline function to compare and contrast the current topology with historical versions.

Before you begin
To view topology history, you must have previously rendered a topology, as described in the “Rendering (visualizing) a topology” on page 485 topic.
Note: You can change a topology if and as required while viewing or refining an existing topology.

About this task
Tip: The topology is displayed in history mode by default.

Procedure
1. Open the Topology History toolbar by clicking the History toggle in the Topology Visualization toolbar (on the left).
2. You can display and refine topology history in a number of ways.
Update mode
The topology is displayed in update mode by default, with Delta mode set to Off. While viewing the timeline in update mode with Delta mode set to On, any changes to the topology history are displayed on the right-hand side of the timeline, with the time pins moving apart at set intervals. By clicking Render, you reset the endpoint to 'now' and the pins form a single line again. While viewing the timeline in update mode with Delta mode set to Off, only a single pin is displayed.
Delta mode
You toggle between Delta mode On and Off using the Delta switch above the topology. When Delta mode is On with Update mode also On, differences in topology are displayed via purple plus or minus symbols next to the affected resource. When Delta mode is On with History mode On (that is, Update mode set to Off), you can compare two time points to view differences in topology. Historical change indicators (blue dots) are displayed next to each affected resource.
Note: For efficiency reasons, historical change indicators are only displayed for topologies with fifty or fewer resources. You can reduce (but not increase) this default by changing the Historical Change Threshold, as described in Defining global settings.
Lock time pin
Click the Lock icon on a time pin's head to lock a time point in place as a reference point, and then use the second time slider to view topology changes.
Compare resource properties
Click Resource Properties on a resource's context menu to compare the resource's data at the two selected time points. You can view and compare the resource's property names and values in table format, or in raw JSON format.
History timeline
You open the Topology History toolbar using the History toggle in the Topology Visualization toolbar (on the left). You use the time pins to control the topology shown. When you move the pins, the topology updates to show the topology representation at that time. While in Delta mode, you can move both pins to show a comparison between the earliest pin and the latest. The timeline shows the historic changes for a single selected resource, which is indicated in the timeline title. You can lock one of the time pins in place to be a reference point.

When you first display the history timeline, coach marks (or tooltips) are displayed, which contain helpful information about the timeline functionality. You can scroll through these, or switch them off (or on again) as required.
To view the timeline for a different resource, click it; the heading above the timeline changes to display the name of the selected resource. If you click the heading, the topology centers on (and zooms into) the selected resource.
The history timeline is displayed above a secondary time bar, which displays a larger time segment and indicates how much of it is depicted in the main timeline. You can use the jump buttons to move back and forth along the timeline, or jump to the current time. You can use the time picker, which opens a calendar and clock, to move to a specific second in time.
To view changes made during a specific time period, use the two time sliders to set the time period. You can zoom in and out to increase or decrease the granularity using the + and - buttons on the right, or by double-clicking within a time frame. The most granular level you can display is an interval of one second. The granularity is depicted with time indicators and parallel bars, which form 'buckets' that contain the recorded resource change event details.
The timeline displays changes to a resource's state, properties, and its relationships with other resources. These changes are displayed through color-coded bars and dash lines, and are elaborated on in a tooltip displayed when you hover over the change. You can exclude one or more of these change types from display.
Resource state changes
The timeline displays the number of state changes a resource has undergone.
Resource property changes
The timeline displays the number of times that resource properties were changed. Each time that property changes were made is displayed as one property change event, regardless of whether one or more properties were changed at the time.
Resource relationship changes
The number of relationships with neighboring resources is displayed, and whether these were changed. The timeline displays when relationships with other resources were changed, and also whether these changes were the removal or addition of a relationship, or the modification of an existing relationship.

Rebuilding a topology
Once you have rendered a topology, you can search for (or define) a new seed resource and build a topology around it, change the number of hops rendered, and switch between element-to-element, host-to-host, and element-to-host hop types.

Before you begin
To rebuild a topology, you must have previously rendered a topology, as described in the “Rendering (visualizing) a topology” on page 485 topic.
Note: You can change a topology if and as required while viewing or refining an existing topology.

About this task
Tip: For information on re-indexing the Search service, see the 'Re-indexing Search' information in the task troubleshooting section of this topic.

Procedure
From the Navigation toolbar, you can again search for a resource around which to build a topology, change the number of hops and the type of hop, and re-render the topology.

Topology Search
If you conduct a resource search from the Navigation toolbar with a topology already loaded, the search functionality searches the loaded topology as well as the topology database. As you type in a search term, a drop-down list is displayed that includes suggested search results from the displayed topology, listed under the In current view heading. If you hover over a search result in this section, the resource is highlighted in the topology window. If you click a search result, the topology view zooms in on that resource and closes the search.
No. Hops
The number of relationship hops to visualize from the seed resource, with the default set at one. You define the number of relationship hops to be performed, which can be from one to four, unless this setting has been customized. See the Defining global settings topic for more information on customizing the maximum hop count.
Type of Hop
The type of graph traversal used. The options are:
Element to Element hop type
This type performs the traversal using all element types in the graph.
Host to Host hop type
This type generates a view showing host-to-host connections.
Element to Host hop type
This type provides an aggregated hop view like the Host to Host type, but also includes the elements that are used to connect the hosts.
Tip: The URL captures the hopType as 'e2h'. When launching a view using a direct URL, you can use the hopType=e2h URL parameter, as shown in the sketch following this list.
Filter toggle
Use this icon to display or hide the Filter toolbar. You can filter resources that are displayed in the topology, or set filters before rendering a topology to prevent a large, resource-intensive topology from being loaded. If a filter has been applied to a displayed topology, the text 'Filtering applied' is displayed in the status bar at the bottom of the topology.
Render
This performs the topology visualization action, rendering the topology based on the settings in the Navigation toolbar. Once rendered, the topology refreshes at a 30-second interval by default. You can pause the auto-update refresh, or select a custom interval.
Tip: The UI can time out if a large amount of data is being received. If a timeout message is displayed, see the timeout troubleshooting information in “Rendering (visualizing) a topology” on page 485.
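For example, a direct-launch URL that requests the Element to Host hop type might take the following form (a hypothetical sketch; the scheme, host, port, path, and any parameters other than hopType are placeholders that depend on your DASH installation):

https://<dash-host>:<port>/<topology-viewer-path>?resource=<seed-resource>&hopType=e2h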

Performing topology administration
From the Topology Viewer, you can obtain direct-launch URLs, perform a system health check, and set user preferences.

Before you begin
Access the Topology Viewer.

About this task
You can perform the following admin actions:
Share direct launch URL
You can copy and save a URL to quickly access a currently defined topology view.

Export a topology snapshot
You can share a snapshot of a topology in either PNG or SVG format.
View system health
You can view your system's health.
Set user preferences
You can set user preferences that define the default settings for rendering your topology.

Procedure
You perform the following actions from the Navigation bar > Additional actions or Navigation bar > Sharing options menus.
Sharing options
You can share a topology either by obtaining a direct URL linking to the topology view, or by exporting a view of the topology as an image.
Obtain Direct URL
Open the Sharing options drop-down menu, and then use the Obtain Direct URL option to display the Direct Topology URL dialog. The displayed URL captures the current topology configuration, including layout type (layout orientation is not tracked). Click Copy to obtain a direct-launch URL string, then click Close to return to the previous screen. Use the direct-launch URL for quick access to a given topology view within DASH.
Tip: You can share this URL with all DASH users with the required permissions.
Export as PNG / SVG
You can share a snapshot of a topology in either PNG or SVG format, for example with someone who does not have DASH access. Open the Sharing options drop-down menu, and then use either the Export as PNG or the Export as SVG option. Specify a name and location, then click Save to create a snapshot of your topology view. You can now share the image as required.
Additional actions > View System Health
Open the Additional actions drop-down menu, and then use the View System Health option to access your Topology management deployment's system health information.
Additional actions > Edit User Preferences
Open the Additional actions drop-down menu, and then use the Edit User Preferences option to access the User Preferences window. Click Save, then Close when done. You can customize the following user preferences to suit your requirements:
Updates
Default auto refresh rate (seconds)
The rate at which the topology is updated. The default value is 30. You must reopen the page before any changes to this user preference take effect.
Maximum number of resources to load with auto refresh enabled
When the resource limit set here is reached, auto-refresh is turned off. The maximum value is 2000, and the default is set to 500.
Tip: If you find that the default value is too high and negatively impacts your topology viewer's performance, reduce this value.
Auto render new resources
Enable this option to display new resources at the next scheduled or ad hoc refresh as soon as they are detected.

Remove deleted topology resources
Enable this option to remove deleted resources at the next scheduled or ad hoc refresh.
Layout
Set the Default layout type, including the layout orientation for some of the layout types. You can also configure a default layout in User Preferences. You can choose from a number of layout types, and also set the orientation for layouts 4, 6, 7, and 8.
Tip: A change to a layout type is tracked in the URL (layout orientation is not tracked). You can manually edit your URL to change the layout type display settings.
The following numbered layout types are available:
Layout 1
A layout that simply displays all resources in a topology without applying a specific layout structure.
Layout 2
A circular layout that is useful when you want to arrange a number of entities by type in a circular pattern.
Layout 3
A grouped layout that is useful when you have many linked entities, as it helps you visualize the entities to which a number of other entities are linked. This layout helps to identify groups of interconnected entities and the relationships between them.
Layout 4
A hierarchical layout that is useful for topologies that contain hierarchical structures, as it shows how key vertices relate to others, with peers in the topology being aligned.
Layout 5
A force-directed (or 'peacock') layout that is useful when you have many interlinked vertices, which group the other linked vertices.
Layout 6
A planar rank layout that is useful when you want to view how the topology relates to a given vertex in terms of its rank, and also how vertices are layered relative to one another.
Layout 7
A rank layout that is useful when you want to see how a selected vertex and the vertices immediately related to it rank relative to the remainder of the topology (up to the specified number of hops). The root selection is automatic: for example, vertices with high degrees of connectivity outrank those with lower degrees of connectivity. This layout ranks the topology automatically around the specified seed vertex.
Layout 8
A root rank layout similar to layout 7, except that it treats the selected vertex as the root. This layout is useful when you want to treat a selected vertex as the root of the tree, with others being ranked below it. Ranks the topology using the selected vertex as the root (root selection: Selection).
Layout orientation
For layouts 4, 6, 7, and 8, you can set the following layout orientations:
• Top to bottom
• Bottom to top
• Left to right
• Right to left
Misc
Information message auto hide timeout (seconds)
The number of seconds that information messages are shown for in the UI.

The default value is 3.
Tip: If you are using a screen reader, it may be helpful to increase this value to ensure that you do not miss the message.
Screen reader support for graphical topology
You can enable the display of additional Help text on screen elements, which can improve the usability of screen readers. You must reopen the page before any changes to this user preference take effect.
Enhanced client side logging, for problem diagnosis
If enabled, additional debug output is generated, which you can use for defect isolation.
Tip: Use this for specific defect-hunting tasks, and then disable it again. If left enabled, it can reduce the topology viewer's performance. You must reopen the page before any changes to this user preference take effect.

Using the topology dashboard
You can use the Topology Dashboard to tag, view, and access your most commonly used 'favorite' defined topologies.

About this task
The Topology Dashboard presents a single view of all defined topologies that have been tagged as favorites. They are displayed as a collection of circles, each surrounded by a color-coded band that indicates the states of all the constituent resources in proportional segments. In addition, each defined topology may display an icon, if assigned by the owner. From here, you can access each defined topology for further investigation or action.

Procedure
Tagging a defined topology as a favorite
1. As the admin user, log in to your DASH web application, then select Administration from the DASH menu.
2. Select Topology Dashboard under the Agile Service Management heading. All defined topologies tagged as favorites are displayed.
3. To add defined topologies as favorites, enter a search term in the Search for a defined topology field. The Search Results page is displayed listing possible results.
4. To save a topology as a favorite, click the star icon next to it on the Search Results page. You can remove the favorite tag by deselecting the star again.
Using the Topology Dashboard to view favorites
5. View defined topology information on the Topology Dashboard.
At a glance
If you hover over a specific defined topology, an extended tooltip is displayed listing the number of resources against their states, which is also proportionally represented by the color-coded band.

More details
If you click a defined topology, it is displayed in the bottom half of the screen, and the window with the defined topology favorites is moved to the top half of the screen. For the displayed topology:
• The state of each resource or relationship is displayed in color, and you can use the context (right-click) menu to obtain further information.
• In the upper 'favorites' display window, all defined topologies that intersect with the selected topology remain in focus, while the others are grayed out.


• If you select a specific resource in your displayed topology, only the displayed favorites that contain the resource remain in focus. You can remove the favorite tag by deselecting the star displayed in the top right corner of the displayed topology.

Searching for and viewing defined topologies (or resources) is described in more detail in the Search section of the Topology Viewer reference topic, and in the “Working with topology” on page 485 topics.
Remember: Defined topologies are updated automatically when the topology database is updated, and generated dynamically as they are displayed in the Topology Viewer.

Monitoring operational efficiency
Use the Event Reduction dashboard to monitor all event occurrences and see the percentage of event reduction that is applied.

About this task
The Total event occurrences pane displays the total number of raw events that occurred in the specified time range (the default selection is the last 6 hours).
Reduction shows the percentage decrease in events, through deduplication and grouping, in the selected timeframe. Peak Reduction is the percentage decrease in events, through deduplication and grouping, in the 5-minute interval with the highest decrease.
Remaining events is the number of events that remain after deduplication and grouping in the selected timeframe. It equals the number of grouped events plus the number of ungrouped events.
The timeline of Incoming events over time is a graph that displays the total incoming event occurrences and remaining events, per 5-minute interval.
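For example, assume (hypothetical figures) that 10,000 raw event occurrences arrived in the selected timeframe, and that 500 events remain after deduplication and grouping. The dashboard would then report Remaining events as 500 and Reduction as (10,000 − 500) / 10,000 = 95%.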

Figure 16. Event Reduction dashboard

Procedure

• To open the Event Reduction dashboard, click Dashboards > Event reduction.

• Click Cycle view mode to rotate the display of the dashboard in different view modes. If the monitor icon is not visible, press the Escape key to exit Kiosk mode. Note: To create a link or snapshot, or export the panel, access the Share dashboard. Select Cycle view mode and press the Escape key, then select the Share dashboard icon. • The default time range for event occurrences is the last 6 hours. Perform any of the following actions to apply a different time range for incoming events:

– Click Time range (which displays Last 6 hours by default) and enter an absolute time range, or select a relative time range from the list provided. – Move your mouse over the timeline graph to view a tooltip showing the number of Total event occurrences for that point in time. Click and drag the cursor along the timeline and release it at the desired time period on the graph.

• To refresh the dashboard, click Refresh. • To set an automatic refresh interval for the dashboard, click Refresh time interval and select an interval. • You can switch between displaying the Total event occurrences and Remaining events data on the graph by clicking either legend under the x-axis.

On-premises systems Perform the following Netcool Operations Insight tasks on on-premises systems to support your operations processes.

Managing events with IBM Netcool/OMNIbus Web GUI Use the Netcool/OMNIbus Web GUI to monitor, investigate, and resolve events.

About this task For information on how to use Web GUI, see the related link at the bottom of this topic.

Using Event Search The event search tools can find the root cause of problems that are generating large numbers of events in your environment. The tools can detect patterns in the event data that, for example, can identify the root cause events that cause event storms. They can save you time that would otherwise be spent manually looking for the event that is causing problems. You can quickly pinpoint the most important events and issues. The tools are built into the Web GUI event lists (AEL and Event Viewer). They run searches against the event data, based on default criteria, filtered over specific time periods. You can search against large numbers of events. You can change the search criteria and specify different time filters. When run, the tools start the Operations Analytics - Log Analysis product, where the search results are displayed.

Before you begin • Set up the environment for event search. See “Configuring integration with Operations Analytics - Log Analysis” on page 302. • Familiarize yourself with the Operations Analytics - Log Analysis search workspace. – If you are using V1.3.5, then see http://www-01.ibm.com/support/knowledgecenter/SSPFMY_1.3.5/com.ibm.scala.doc/use/iwa_using_ovw.html. – If you are using V1.3.3, then see http://www-01.ibm.com/support/knowledgecenter/SSPFMY_1.3.3/com.ibm.scala.doc/use/iwa_using_ovw.html. • To understand the event fields that are indexed for use in event searches, familiarize yourself with the ObjectServer alerts.status table. See http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/common/reference/omn_ref_tab_alertsstatus.html.

Procedure • To start using the event search tools, select one or more events from an event list and right-click. From the right-click menu, click Event Search, click a tool, and click a time filter. The tools are as follows:

Show event dashboard by node: Searches for all events that originate from the same host name, service name, or IP address, which is equivalent to the Node field of the ObjectServer alerts.status table. Search for similar events: Searches for all events that have the same failure type, type, and severity as the selected events. The failure type equates to the AlertGroup field of the alerts.status table. The type equates to the Type field. The severity equates to the Severity field. Search for events by node: Searches for all events that originate from the same source, that is, host name, service name, or IP address. This is equivalent to the Node field of the alerts.status table. The results are displayed in a list in the Operations Analytics - Log Analysis GUI. Show keywords and event count: Extracts a list of keywords from the text of the event summary, event source, and failure type. The event summary text equates to the Summary field of the alerts.status table. The event source equates to the Node field. The failure type equates to the AlertGroup field.

The time filters are calculated from the time stamp of the selected event or events. The Operations Analytics - Log Analysis time stamp is equivalent to the FirstOccurrence field of the ObjectServer alerts.status table. The default time filters are as follows. If you click Custom, specify an integer and a unit of time, such as 15 weeks. – 15 minutes before event – 1 hour before event – 1 day before event – 1 week before event – 1 month before event – 1 year before event – Custom ... If a single event is selected that has the time stamp 8 January 2014 08:15:26 AM, and you click Search for events by node > 1 hour before event, the result is filtered on the following time range: (8 January 2014 07:15:26 AM) to (8 January 2014 08:15:26 AM). If multiple events are selected, the time filter is applied from the earliest to the most recent time stamp. For three events that have the time stamps 1 January 2014 8:28:46 AM, 7 January 2014 8:23:20 AM, and 8 January 2014 8:15:26 AM, Search for events by node > 1 week before event returns matching events in the following time range: (25 December 2013 08:28:46 AM) to (08 January 2014 08:15:26 AM). Restriction: The Web GUI and Operations Analytics - Log Analysis process time stamps differently. The Web GUI recognizes hours, minutes, and seconds but Operations Analytics - Log Analysis ignores seconds. This problem affects the Show event dashboard by node and Search for events by node tools. If the time stamp 8 January 2014 07:15:26 AM is passed, Operations Analytics - Log Analysis interprets this time stamp as 8 January 2014 07:15 AM. So, the results of subsequent searches might differ from the search that was originally run.
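The following sketch (Python, illustrative only; it is not part of the product) shows how a time filter maps the FirstOccurrence time stamps of the selected events to a search range, reproducing the multiple-event example above:

from datetime import datetime, timedelta

def search_range(first_occurrences, before=timedelta(weeks=1)):
    # The filter runs from the earliest time stamp minus the chosen
    # period, up to the most recent time stamp.
    earliest = min(first_occurrences)
    latest = max(first_occurrences)
    return earliest - before, latest

stamps = [datetime(2014, 1, 1, 8, 28, 46),
          datetime(2014, 1, 7, 8, 23, 20),
          datetime(2014, 1, 8, 8, 15, 26)]
print(search_range(stamps))  # 2013-12-25 08:28:46 to 2014-01-08 08:15:26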

The results are displayed differently depending on the tool. The time filter has no effect on how the results are displayed.

Show event dashboard by node: A dashboard is opened for the OMNIbus Static Dashboard custom app that shows the following information about the distribution of the matching events: – Event Trend by Severity – Event Storm by AlertGroup – Event Storm by Node – Hotspot by Node and AlertGroup – Severity Distribution – Top 5 AlertGroups Distribution – Top 5 Nodes Distribution – Hotspot by AlertGroup and Severity For more information about the OMNIbus Static Dashboard custom app, see “Netcool/OMNIbus Insight Pack” on page 311.

Search for similar events and Search for events by node: The results are displayed in the search timeline, which shows the distribution of matching events over the specified time period. Below the timeline, the list of results is displayed. Click Table View or List View to change how the results are formatted. Click > or < to move forward and back in the pages of results. Keywords that occur multiple times in the search results are displayed in the Common Patterns area of the navigation pane, with the number of occurrences in parentheses ().

Show keywords and event count: The keywords are displayed in the Configured Patterns area of the Operations Analytics - Log Analysis GUI. Each occurrence of the keyword over the time period is counted and displayed in parentheses () next to the keyword.

• After the results are displayed, you can refine them by performing further searches on the results in the search workspace. For example, click a keyword from the Configured Patterns list to add it to the Search field. Important: Because of the difference in handling seconds between the two products, if you run a further search against the keyword counts that result from the Show keywords and event count tool, you might see a difference in the count that was returned for a keyword under Configured Patterns and in the search that you run in the search workspace. Above the Search field, a sequence of breadcrumbs is displayed to indicate the progression of your search. Click any of the breadcrumb items to return to the results of that search.

Example The Show keywords and event count tool can examine what happened before a problematic event in your environment. Assume that high numbers of critical events are being generated in an event storm. A possible workflow is as follows: • You select a number of critical events and click Event search > Show keywords and event count > 1 hour before event so that you can identify any similarities between critical events that occurred in the last hour. • The most recent time stamp (FirstOccurrence) of an event is 1 January 2014 8:28:00 AM. In the Operations Analytics - Log Analysis GUI, the search results show all keywords from the Summary, Node, and AlertGroup fields and the number of occurrences.

• You notice that the string "swt0001", which is the host name of a switch in your environment, has a high number of occurrences. You click swt0001 and run a further search, which reduces the number of results to only the events that contain "swt0001". • From this pared-down results list, you quickly notice that one event shows that the switch is misconfigured, and that this problem is causing problems downstream in the environment. You can then return to the event list in the Web GUI and take action against this single event.

What to do next Perform the actions that are appropriate for your environment against the events that are identified by the searches. See http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/webtop/wip/task/web_use_jsel_manageevents.html for the Event Viewer and http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/webtop/wip/task/web_use_ael_managingevents.html for the AEL. Related concepts Operations Management tasks Use this information to understand the tasks that users can perform using Operations Management.

Event search workflow for operators A typical workflow shows operators how the event search tools can assist with triage and diagnostics from the event list. Assume the following situation: an event storm has been triggered, but the cause of the storm is unclear. For the past hour, large numbers of critical events have been generated. Run the event search tools against the critical events. 1. To gain an overview of what has happened since the event storm started, select the critical events. Then, right-click and click Event search > Show event dashboard by node > 1 hour before event. The charts that are displayed show how the critical events break down, by node, alert group, severity, and so on. 2. Check whether any nodes stand out on the charts. If so, close the Operations Analytics - Log Analysis GUI, return to the event list and find an event that originates on that node. For example, type a filter in the text box on the Event Viewer toolbar like the following example that filters on critical events from the mynode node.

SELECT * from alerts.status where Node = 'mynode' and Severity = 5;

After the event list refreshes to show only matching events, select an event, right-click, and click Event search > Search for events by node > 1 hour before event. 3. In the search results, check whether an event from that node stands out. If so, close the Operations Analytics - Log Analysis GUI, return to the event list, locate the event, for example, by filtering on the summary or serial number:

SELECT * from alerts.status where Node = 'mynode' and Summary like 'Link Down ( FastEthernet0/13 )';

SELECT * from alerts.status where Node = 'mynode' and Serial = 4586967;

Action the event. 4. If nothing stands out that identifies the cause of the event storm, close the Operations Analytics - Log Analysis GUI and return to the event list. Select all the critical events again and click Event search > Show keywords and event count > 1 hour before event. 5. From the results, look in the Common Patterns area on the navigation pane. Look for keywords that are not generic but have a high occurrence, for instance host names or IP addresses. 6. Refine the search results by clicking relevant keywords to copy them to the Search field and running the search. All events in which the keyword occurs are displayed, and the Common Patterns area is updated.

7. If an event stands out as the cause of the event storm, close the Operations Analytics - Log Analysis GUI, return to the event list, and action the event. If not, continue to refine the search results by searching against keywords until a likely root cause event stands out. For possible actions from the Event Viewer, see http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/webtop/wip/task/web_use_jsel_manageevents.html. For possible actions from the Active Event List, see http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/webtop/wip/task/web_use_ael_managingevents.html. Other actions are possible, depending on the tools that are implemented in your environment.

Using Topology Search After the topology search capability is configured, you can have Operations Analytics - Log Analysis show you the events that occurred within a specific time period on routes between two devices in the network topology. This capability is useful to pinpoint problems on the network, for example, in response to a denial of service attack on a PE device. The custom apps of the Network Manager Insight Pack can be run from the Operations Analytics - Log Analysis GUI and, depending on your configuration, from the Network Views in Network Manager IP Edition and the event lists in the Web GUI. The custom apps support searches on Layer 2 and Layer 3 of the topology. The custom apps use the network-enriched event data and the topology data from the Network Manager IP Edition NCIM database. They plot the lowest-cost routes across the network between two nodes (that is, network entities) and count the events that occurred on the nodes along the routes. You can specify different time periods for the route and events. The algorithm uses the speed of the interfaces along the routes to calculate the routes that are lowest-cost. That is, the fastest routes from start to end along which a packet can be sent. The network topology is based on the most recent discovery. Historical routes are not accounted for. If your network topology is changeable, the routes between the nodes can change over time. If the network is stable, the routes stay current.
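The product's exact route computation is not documented here, but the following minimal sketch (Python, illustrative only) shows the general idea of a lowest-cost route search in which each link's cost is inversely proportional to its interface speed, so faster links are cheaper:

import heapq

def lowest_cost_route(links, start, end):
    # links: dict mapping node -> list of (neighbor, interface_speed_bps)
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == end:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, speed in links.get(node, []):
            if neighbor not in visited:
                # Faster interfaces contribute a lower cost.
                heapq.heappush(queue, (cost + 1.0 / speed, neighbor, path + [neighbor]))
    return None

# Toy topology: two routes from A to D; the route over faster links wins.
links = {
    "A": [("B", 1e9), ("C", 1e8)],
    "B": [("D", 1e9)],
    "C": [("D", 1e8)],
}
print(lowest_cost_route(links, "A", "D"))  # route A -> B -> D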

Before you begin • Knowledge of the events in your topology is required to obtain meaningful results from the topology search, for example, how devices are named in your environment, or with what information devices are enriched. Device names are usually indicative of their functions. This level of understanding helps you run searches in Operations Analytics - Log Analysis. • Configure the products to enable the topology search capability. See “Configuring topology search” on page 335. • To avoid reentering user credentials when launching between products, configure SSO. See “Configuring single sign-on for the topology search capability” on page 337. • Create the network views that visualize the parts of the network that you are responsible for and want to search. See https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/admin/task/adm_crtnwview.html. • Reconfigure your views in the Web GUI to display the NmosObjInst column. The tools that launch the custom apps of the Network Manager Insight Pack work only against events that have a value in this column. See http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/webtop/wip/task/web_cust_settingupviews.html.

Procedure The flow of this procedure is to select the two nodes, then select the tool and a time period over which the tool searches the historical event data. Then, in the Operations Analytics - Log Analysis UI, select the route that you are interested in and view the events. You can run searches on the events to refine the results. 1. Run the topology search from one of the products, as follows: • Web GUI event lists: a. In an Event Viewer or AEL, select two rows that have a value in the NmosObjInst column.

b. Right-click and click Event Search > Find events between two nodes > Layer 2 Topology or Event Search > Find events between two nodes > Layer 3 Topology, depending on which layer of the topology you want to search. c. Click a time filter, or click Custom and select one. • Network Manager IP Edition network views: a. Select two devices. b. Click Event Search > Find Events Between 2 Nodes > Layer 2 Topology or Event Search > Find Events Between 2 Nodes > Layer 3 Topology, depending on which layer of the topology you want to search. c. Click a time filter, or click Custom and select one. • Operations Analytics - Log Analysis UI. In the Operations Analytics - Log Analysis UI, the app requires search results before you can run it. In the search results, select the NmosObjInst column. The app finds the events between the two nodes on which each selected event originated. Important: Select the NmosObjInst cells only. Do not select the entire rows. If you select the entire rows, no results are found, or incorrect routes between the entities on the network are found. In the Search Dashboards section of the UI, click NetworkManagerInsightPack > Find events between two nodes on layer 2 topology or Find events between two nodes on layer 3 topology, depending on which network layer you want to view. See “Example” on page 505 for an example of how to run the apps from the Operations Analytics - Log Analysis UI. The results of the search are displayed on the Operations Analytics - Log Analysis UI as follows: Find alerts between two nodes on layer 2 topology This app shows the distribution of alerts on the least-cost routes between two network end points in a layer 2 topology. Charts show the alert distribution by severity and alert group for each route over the specified time period. The ObjectServer field for the alert group is AlertGroup. A list of the routes is displayed from which you can search the events that occurred on each route over the specified time period. Find alerts between two nodes on layer 3 topology This app shows the distribution of alerts on the least-cost routes between two network end points in a layer 3 topology. Charts show the alert distribution by severity and alert group for each route over the specified time period. The ObjectServer field for the alert group is AlertGroup. A list of the routes is displayed from which you can search the events that occurred on each route over the specified time period. The apps count the events that occurred over predefined periods of time, relative to the current time, or over a custom time period that you can specify. For the predefined time periods, the current time is calculated differently, depending on which product you run the apps from. Network Manager IP Edition uses the current time stamp. The Tivoli Netcool/OMNIbus Web GUI uses the time that is specified in the FirstOccurrence field of the events. Restriction: The Web GUI and Operations Analytics - Log Analysis process time stamps differently. The Web GUI recognizes hours, minutes, and seconds but Operations Analytics - Log Analysis ignores seconds. This problem affects the Show event dashboard by node and Search for events by node. If the time stamp 8 January 2014 07:15:26 AM is passed, Operations Analytics - Log Analysis interprets this time stamp as 8 January 2014 07:15 AM. So, the results of subsequent searches might differ from the search that was originally run. 2. From the bar charts, identify the route that is of most interest.
Then, on the right side of the UI, click the link that corresponds to that route. A search result is returned that shows all the events that occurred within the specified time frame on that network route. 3. Refine the search results.

You can use the patterns that are listed in Search Patterns. For example, to search the results for critical events, click Search Patterns > Severity > Critical. A search string is copied to the search field. Then, click Search. 4. Extend and refine the search as required. For more information about searches in Operations Analytics - Log Analysis, see one of the following links: • Operations Analytics - Log Analysis V1.3.5: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.5/com.ibm.scala.doc/use/iwa_using_ovw.html • Operations Analytics - Log Analysis V1.3.3: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.3/com.ibm.scala.doc/use/iwa_using_ovw.html

Example An example of how to run the custom apps from the Operations Analytics - Log Analysis UI. This example searches between 2 IP addresses: 172.20.1.3 and 172.20.1.5. 1. To run a new search, click Add search and type NodeAlias:"172.20.1.3" OR NodeAlias:"172.20.1.5". Operations Analytics - Log Analysis returns all events that have the NodeAlias 172.20.1.3, or the NodeAlias 172.20.1.5. 2. In the results display, switch to grid view. Scroll across until you see the NmosObjInst column. Identify 2 rows that have different NmosObjInst values. 3. For these rows, select the cells in the NmosObjInst column. 4. In the Search Dashboards section of the UI, click NetworkManagerInsightPack > Find events between two nodes on layer 2 topology or Find events between two nodes on layer 3 topology, depending which network layer you want to view. Related concepts Network Management tasks Use this information to understand the tasks that users can perform using Network Management.

IBM Networks for Operations Insight Networks for Operations Insight adds network management capabilities to the Netcool Operations Insight solution. These capabilities provide network discovery, visualization, event correlation and root-cause analysis, and configuration and compliance management that provide service assurance in dynamic network infrastructures. It contributes to overall operational insight into application and network performance management. For documentation that describes how to install Networks for Operations Insight, see Performing a fresh installation. For documentation that describes how to upgrade an existing Networks for Operations Insight installation, or transition to Networks for Operations Insight, see “Upgrade and roll back on premises” on page 183.

Before you begin The Networks for Operations Insight capability is provided through setting up the following products in Netcool Operations Insight: • Network Manager IP Edition, see Network Manager Knowledge Center • Netcool Configuration Manager, see http://www-01.ibm.com/support/knowledgecenter/SS7UH9/ welcome In addition, you can optionally add on performance management capability by setting up the Network Performance Insight product and integrating it with Netcool Operations Insight. Performance management capability includes the ability to display and drill into performance anomaly and flow data. For more information on Network Performance Insight, see the Network Performance Insight Knowledge Center, https://www.ibm.com/support/knowledgecenter/SSCVHB .

About Networks for Operations Insight Networks for Operations Insight provides dashboard functionality that enables network operators to monitor the network, and network planners and engineers to track and optimize network performance.

About Networks for Operations Insight dashboards As a network operator, you can monitor network performance at increasing levels of detail. As a network planner or engineer, you can display the top 10 interfaces based on network congestion, traffic utilization, and quality of service (QoS). You can also run reports to show historical traffic utilization information over different periods of time up to the last 365 days. If you are a network operator, then use the dashboards available in Networks for Operations Insight to monitor performance at the level of detail that you require: • Use the Network Health Dashboard to monitor performance of all devices in a network view. • Use the Device Dashboard to monitor performance at the device or interface level. • Use the Traffic Details dashboard to monitor traffic flow details across a single interface. If you are a network planner or engineer, then use “Network Performance Insight Dashboards” on page 545 to view top 10 information on interfaces across your network, including the following: • Congestion • Traffic utilization • Quality of service You can also run flow data reports for any device and interface over different periods of time up to the last 365 days.

Scenario: Monitoring bandwidth usage If a user or application is using a lot of interface bandwidth, this can cause performance degradation across the network. This scenario shows you how to set up the Device Dashboard to monitor bandwidth usage on selected interfaces, and how to navigate into the Traffic Details dashboard to see exactly which user or application is using the most bandwidth on that interface. The first steps involve setting up thresholds for bandwidth performance monitoring. Once this is done, you can monitor a device or interface at metric level, and navigate to more detailed information, such as network flow through an interface, to determine what is causing performance degradation. The steps are described in the following table.

Table 74. Scenario for bandwidth performance monitoring

1. Ensure that the poll definitions snmpInBandwidth and snmpOutBandwidth exist and are set up to poll the interfaces for which you want to monitor bandwidth. More information: Network Manager documentation: Creating poll policies (https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/poll/task/poll_crtpoll.html) and Creating poll definitions (https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/poll/task/crtpolldef.html).

2. Within these poll definitions, define anomaly threshold settings. Performance anomalies in the Device Dashboard and in the Event Viewer are generated based on these threshold settings. More information: “Defining anomaly thresholds” on page 537.

3. Enable the collection of flow data on the devices and interfaces of interest. More information: “Defining traffic flow thresholds” on page 535.


4. Once your settings in steps 2 and 3 have taken effect, launch the Device Dashboard and point it at the device of interest. Within the Performance Insights portlet, proceed as follows: 1. Select the Interfaces tab. 2. Click Metrics and select snmpInBandwidth or snmpOutBandwidth and find the interface of interest in the table. Note: If the interface of interest is showing a performance anomaly, this means that bandwidth thresholds are being exceeded on this interface. Perform steps 5 and 6 to determine who is using the bandwidth. More information: “Monitoring performance data” on page 530.

5. Right-click the interface of interest and select Show Traffic Details. The Traffic Details dashboard opens in a separate tab and displays traffic flow through the selected interface. More information: “Displaying traffic data on an interface” on page 534.

6. Use the controls in the Traffic Details dashboard to display the traffic flow view of interest. For example, to see which source device and application is using the most bandwidth, display the Top Sources with Application view. More information: “Traffic Details dashboard views” on page 539 and “Monitoring NetFlow performance data from Traffic Details dashboard” on page 543.

About the Network Health Dashboard Use the Network Health Dashboard to monitor a selected network view, and display availability, performance, and event data, as well as configuration and event history for all devices in that network view. Related concepts Network Management tasks Use this information to understand the tasks that users can perform using Network Management.

Monitoring the network using the Network Health Dashboard Use this information to understand how to use the Network Health Dashboard to determine if there are any network issues, and how to navigate from the dashboard to other parts of the product for more detailed information. The Network Health Dashboard monitors a selected network view, and displays device and interface availability within that network view. It also reports on performance by presenting graphs, tables, and traces of KPI data for monitored devices and interfaces. A dashboard timeline reports on device configuration changes and event counts, enabling you to correlate events with configuration changes. The dashboard includes the event viewer, for more detailed event information.

Monitoring the Network Health Dashboard Monitor the Network Health Dashboard by selecting a network view within your area of responsibility, such as a geographical area, or a specific network service such as BGP or VPN, and reviewing the data that appears in the other widgets on the dashboard. If you have set up a default network view bookmark that contains the network views within your area of responsibility, then the network views in that bookmark will appear in the network view tree within the dashboard.

Before you begin For more information about the network view tree in the Network Health Dashboard, see “Configuring the network view tree to display in the Network Health Dashboard” on page 515.

About this task Note: The minimum screen resolution for display of the Network Health Dashboard is 1536 x 864. If your screen is less than this minimum resolution, then you will see scroll bars on one or more of the widgets in the Network Health Dashboard.

Displaying device and interface availability in a network view Using the Unavailable Resources widget you can monitor, within a selected network view, the number of device and interface availability alerts that have been open for more than a configurable amount of time. By default this widget charts the number of device and interface availability alerts that have been open for up to 10 minutes, for more than 10 minutes but less than one hour, and for more than one hour.

About this task To monitor the number of open device and interface availability alerts within a selected network view, proceed as follows:

Procedure 1. Open the Network Health Dashboard. 2. In the Network Health Dashboard, select a network view from the network view tree in the Network Views at the top left. The other widgets update to show information based on the network view that you selected. In particular, the Unavailable Resources widget updates to show device and interface availability in the selected network view. A second tab, called "Network View", opens. This tab contains a dashboard comprised of the Network Views GUI, the Event Viewer, and the Structure Browser, and it displays the selected network view. You can use this second tab to explore the topology of the network view that you are displaying in the Network Health Dashboard. For information about specifying which network view tree to display in the Network Health Dashboard, see “Configuring the network view tree to display in the Network Health Dashboard” on page 515. 3. In the Unavailable Resources widget, proceed as follows: To determine the number of unavailable device and interface alerts, use the following sections of the chart and note the colors of the stacked bar segments and the number inside each segment. Restriction: By default, all of the bars described below are configured to display. However, you can configure the Unavailable Resources widget to display only specific bars. For example, if you configure the widget to display only the Device Ping and the Interface Ping bars, then only those bars will be displayed in the widget. Note: By default the data in the Unavailable Resources widget is updated every 20 seconds. SNMP Poll Fail Uses color-coded stacked bars to display the number of SNMP Poll Fail alerts within the specified timeframe. SNMP Link State Uses color-coded stacked bars to display the number of SNMP Link State alerts within the specified timeframe. Interface Ping Uses color-coded stacked bars to display the number of Interface Ping alerts within the specified timeframe.

Device Ping Uses color-coded stacked bars to display the number of Device Ping alerts within the specified timeframe. Color coding of the stacked bars is as follows: Table 75. Color coding in the Unavailable Resources widget

Yellow: Number of alerts that have been open for up to 10 minutes.

Pink: Number of alerts that have been open for more than 10 minutes and up to one hour.

Blue: Number of alerts that have been open for more than one hour.

Click any one of these bars to show the corresponding alerts for the devices and interfaces in the Event Viewer at the bottom of the Network Health Dashboard. Note: You can change the time thresholds that are displayed in this widget. The default threshold settings are 10 minutes and one hour. If your availability requirements are less stringent, then you could change this, for example, to 30 minutes and 3 hours. The change applies on a per-user basis. If none of the devices in the current network view is being polled by any one of these polls, then the corresponding stacked bar will always display zero values. For example, if none of the devices in the current network view is being polled by the SNMP Poll Fail poll, then the SNMP Poll Fail bar will always display zero values. If you are able to access the Configure Poll Policies panel in the Network Polling GUI, then you can use the Device Membership field on that table to see a list of all devices across all network views that are polled by the various poll policies.
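As a minimal sketch of the bucketing behavior described above (Python, illustrative only; it uses the default thresholds of 10 minutes and one hour, which you can change per user):

def availability_bucket(minutes_open, lower=10, upper=60):
    # Map the age of an open availability alert to its color-coded segment.
    if minutes_open <= lower:
        return "yellow"   # open for up to 10 minutes
    if minutes_open <= upper:
        return "pink"     # more than 10 minutes, up to one hour
    return "blue"         # more than one hour

for age in (5, 30, 120):
    print(age, availability_bucket(age))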

Displaying overall network view availability You can monitor overall availability of chassis devices within a selected network view using the Percentage Availability widget.

About this task To display overall availability of chassis devices within a selected network view, proceed as follows:

Procedure 1. Open the Network Health Dashboard. 2. In the Network Health Dashboard, select a network view from the network view tree in the Network Views at the top left. The other widgets update to show information based on the network view that you selected. In particular, the Percentage Availability widget updates to show overall availability of chassis devices in the selected network view. A second tab, called "Network View", opens. This tab contains a dashboard comprised of the Network Views GUI, the Event Viewer, and the Structure Browser, and it displays the selected network view. You can use this second tab to explore the topology of the network view that you are displaying in the Network Health Dashboard. For information about specifying which network view tree to display in the Network Health Dashboard, see “Configuring the network view tree to display in the Network Health Dashboard” on page 515. 3. In the Percentage Availability widget, proceed as follows: The Percentage Availability widget displays 24 individual hour bars. Each bar displays a value, which is an exponentially weighted moving average of ping results in the past hour; the bar only appears on the completion of the hour. The bar value represents a percentage availability rate rather than a total count within that hour. The color of the bar varies as follows:

• Green: 80% or more. • Orange: Between 50% and 80%. • Red: Less than 50%.

Displaying highest and lowest performers in a network view You can monitor highest and lowest poll data metrics across all devices and interfaces within a selected network view using the Top Performers widget.

About this task To display highest and lowest poll data metrics across all devices and interfaces within a selected network view, proceed as follows:

Procedure 1. Network Health Dashboard 2. In the Network Health Dashboard, select a network view from the network view tree in the Network Views at the top left. The other widgets update to show information based on the network view that you selected. In particular, the Top Performers widget updates to show overall availability of chassis devices in network view. A second tab, called "Network View", opens. This tab contains a dashboard comprised of the Network Views GUI, the Event Viewer, and the Structure Browser, and it displays the selected network view. You can use this second tab to explore the topology of the network view that you are displaying in the Network Health Dashboard. For information about specifying which network view tree to display in the Network Health Dashboard, see “Configuring the network view tree to display in the Network Health Dashboard” on page 515. 3. In the Top Performers widget, proceed as follows: Select from the following controls to display chart, table, or trace data in the Top Performers widget. Metric Click this drop-down list to display a selected set of poll data metrics. The metrics that are displayed in the drop-down list depend on which poll policies are enabled for the selected network view. Select one of these metrics to display associated data in the main part of the window. Order Click this drop-down list to display what statistic to apply to the selected poll data metric. • Statistics available for all metrics, except the SnmpLinkStatus metric. From Top: Displays a bar chart or table that shows the 10 highest values for the selected metric. The devices or interfaces with these maximum values are listed in the bar chart or table. From Bottom: Displays a bar chart or table that shows the 10 lowest values for the selected metric. The devices or interfaces with these minimum values are listed in the bar chart or table. • Statistics available for the SnmpLinkStatus metric. In each case, a bar chart or table displays and shows devices for the selected statistic. Unavailable: This statistic displays by default. Devices with this statistic are problematic. Admin Down Devices with this statistic are not problematic as Administrators change devices to this state. Available Devices with this statistic are not problematic. Note: The widget lists devices or interfaces depending on which metric was selected: • If the metric selected applies to a device, such as memoryUtilization, then the top 10 list contains devices.

• If the metric selected applies to an interface, such as ifInDiscards, then the top 10 list contains interfaces.

Show Chart Displays a bar chart with the 10 highest or lowest values. Show Chart is the display option when you first open the widget.

Show Table Displays a table of data associated with the 10 highest or lowest values.

Define Filter This button only appears if you are in Show Table mode. Click here to define a filter to apply to the Top Performers table data. The main part of the window contains the data in one of the following formats: Chart Bar chart with the 10 highest or lowest values. Click any bar in the chart to show a time trace for the corresponding device or interface. Table Table of data associated with the 10 highest or lowest values. The table contains the following columns: • Entity Name: Name of the device or interface. • Show Trace: Click a link in one of the rows to show a time trace for the corresponding device or interface. • Last Poll Time: Last time this entity was polled. • Value: Value of the metric the last time this entity was polled. Trace Time trace of the data for a single device or interface. Navigate within this trace by performing the following operations: • Zoom into the trace by moving your mouse wheel forward. • Zoom out of the trace by moving your mouse wheel backward. • Double-click to restore the normal zoom level. • Click within the trace area for a movable vertical line that displays the exact value at any point in time. Click one of the following buttons to specify which current or historical poll data to display in the main part of the window. This button updates the data regardless of which mode is currently being presented: bar chart, table, or time trace. Restriction: If your administrator has opted not to store poll data for any of the poll data metrics in the Metric drop-down list, then historical poll data will not be available when you click any of the following buttons: • Last Day • Last Week • Last Month • Last Year Current Click this button to display current raw poll data. When in time trace mode, depending on the frequency of polling of the associated poll policy, the time trace shows anything up to two hours of data.

Last Day Click this button to show data based on a regularly calculated daily average. • In bar chart or table mode, the top 10 highest or lowest values are shown based on a daily exponentially weighted moving average (EWMA). • In time trace mode, a time trace of the last 24 hours is shown, based on the average values. In the Last Day section of the widget EWMA values are calculated by default every 15 minutes and are based on the previous 15 minutes of raw poll data. The data presented in this section of the widget is then updated with the latest EWMA value every 15 minutes. Last Week Click this button to show data based on a regularly calculated weekly average. • In bar chart or table mode, the top 10 highest or lowest values are shown based on a weekly exponentially weighted moving average (EWMA). • In time trace mode, a time trace of the last 7 days is shown, based on the average values. In the Last Week section of the widget EWMA values are calculated by default every 30 minutes and are based on the previous 30 minutes of raw poll data. The data presented in this section of the widget is then updated with the latest EWMA value every 30 minutes. Last Month Click this button to show data based on a regularly calculated monthly average. • In bar chart or table mode, the top 10 highest or lowest values are shown based on a monthly exponentially weighted moving average (EWMA). • In time trace mode, a time trace of the last 30 days is shown, based on the average values. In the Last Month section of the widget EWMA values are calculated by default every two hours and are based on the previous two hours of raw poll data. The data presented in this section of the widget is then updated with the latest EWMA value every two hours. Last Year Click this button to show data based on a regularly calculated yearly average. • In bar chart or table mode, the top 10 highest or lowest values are shown based on a yearly exponentially weighted moving average (EWMA). • In time trace mode, a time trace of the last 365 days is shown, based on the average values. In the Last Year section of the widget EWMA values are calculated by default every day and are based on the previous 24 hours of raw poll data. The data presented in this section of the widget is then updated with the latest EWMA value every day.
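For readers unfamiliar with EWMAs, the following minimal sketch (Python, illustrative only; the smoothing factor is an assumption, not the product's actual value) shows how an EWMA folds each new raw poll sample into a single smoothed value:

def ewma(samples, alpha=0.3):
    # Each new sample is blended with the running average, so recent
    # samples carry more weight than older ones.
    average = None
    for value in samples:
        average = value if average is None else alpha * value + (1 - alpha) * average
    return average

# Example: smooth 15 minutes of raw poll samples into one Last Day data point.
print(ewma([42.0, 55.5, 48.2, 60.1]))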

Displaying the Configuration and Event Timeline You can display a timeline showing, for all devices in a selected network view, device configuration changes and network alert data over a time period of up to 24 hours using the Configuration and Event Timeline widget. Correlation between device configuration changes and network alerts on this timeline can help you identify where configuration changes might have led to network issues.

About this task To display a timeline showing device configuration changes and network alert data for all devices in a selected network view, proceed as follows:

Procedure 1. Open the Network Health Dashboard. 2. In the Network Health Dashboard, select a network view from the network view tree in the Network Views at the top left. The other widgets update to show information based on the network view that you selected.

In particular, the Configuration and Event Timeline updates to show configuration change and event data for the selected network view. A second tab, called "Network View", opens. This tab contains a dashboard comprised of the Network Views GUI, the Event Viewer, and the Structure Browser, and it displays the selected network view. You can use this second tab to explore the topology of the network view that you are displaying in the Network Health Dashboard. For information about specifying which network view tree to display in the Network Health Dashboard, see “Configuring the network view tree to display in the Network Health Dashboard” on page 515. 3. In the Configuration and Event Timeline widget, proceed as follows: Configuration changes displayed in the Configuration and Event Timeline can be any of the following. Move your mouse over the configuration change bars to view a tooltip listing the different types of configuration change made at any time on the timeline. Note: If you do not have Netcool Configuration Manager installed, then no configuration data is displayed in the timeline. Changes managed by Netcool Configuration Manager These changes are made under full Netcool Configuration Manager control. The timeline differentiates between scheduled or policy-based changes, which can be successful (Applied) or unsuccessful (Not Applied), and one-time changes made using the IDT Audited terminal facility within Netcool Configuration Manager. Applied A successful scheduled or policy-based set of device configuration changes made under the control of Netcool Configuration Manager. Not Applied An unsuccessful scheduled or policy-based set of device configuration changes made under the control of Netcool Configuration Manager. IDT Device configuration changes made using the audited terminal facility within Netcool Configuration Manager that allows one-time command-line based configuration changes to devices. Unmanaged changes OOBC Out-of-band-change. Manual configuration change made to a device where that change is outside of the control of Netcool Configuration Manager. Events are displayed in the timeline as stacked bars, where the color of each element in the stacked bar indicates the severity of the corresponding events. Move your mouse over the stacked bars to view a tooltip listing the number of events at each severity level. The X-axis granularity for both events and configuration changes varies depending on the time range that you select for the timeline.

Table 76. X axis granularity in the Configuration and Event Timeline

If you select this time range, then the X-axis granularity is as follows:

6 hours: 15 minutes

12 hours: 30 minutes

24 hours: 1 hour

For more detailed information on the different types of configuration change, see the Netcool Configuration Manager knowledge center at http://www-01.ibm.com/support/knowledgecenter/SS7UH9/welcome.

Select from the following controls to define what data to display in the Configuration and Event Timeline. Time Select the duration of the timeline: • 6 Hours: Click to set a timeline duration of 6 hours. • 12 Hours: Click to set a timeline duration of 12 hours. • 24 Hours: Click to set a timeline duration of 24 hours. Events by Occurrence • First Occurrence: Click to display events on the timeline based on the first occurrence time of the events. • Last Occurrence: Click to display events on the timeline based on the last occurrence time of the events.

Show Table Displays the configuration change data in tabular form. The table contains the following columns. Note: If you do not have Netcool Configuration Manager installed, then this button is not displayed. • Number: Serial value indicating the row number. • Device: Host name or IP address of the affected device. • Unit of Work (UoW): In the case of automated Netcool Configuration Manager configuration changes, the Netcool Configuration Manager unit of work under which this configuration change was processed. • Result: Indicates whether the change was successful. • Start Time: The time at which the configuration change began. • End Time: The time at which the configuration change completed. • User: The user who applied the change. • Description: Textual description associated with this change.

Show Chart Click here to switch back to the default graph view. Note: If you do not have Netcool Configuration Manager installed, then this button is not displayed. Use the sliders under the timeline to zoom in and out of the timeline. The legend under the timeline shows the colors used in the timeline to display the following items: • Event severity values. • Configuration change types. Note: If the integration with Netcool Configuration Manager has been set up but there is a problem with data retrieval from Netcool Configuration Manager, then the configuration change types shown in the legend are marked with the following icon:

Configuring the Network Health Dashboard As an end user, you can configure the Network Health Dashboard to display the data you want to see.

Configuring the network view tree to display in the Network Health Dashboard As a user of the Network Health Dashboard, you can configure a default bookmark to ensure that you limit the data that is displayed in the Network Health Dashboard to the network views within your area of responsibility.

About this task The network views tree in the Network Health Dashboard automatically displays the network views in your default network view bookmark. If there are no network views in your default bookmark, then a message is displayed with a link to the Network Views GUI, where you can add network views to your default bookmark. The network views that you add to your default bookmark will be displayed in the network tree within the Network Health Dashboard. Complete the following steps to add network views to your default bookmark.

Procedure 1. Within the displayed message, click the link that is provided. The Network Views GUI opens in a second tab. 2. Follow the instructions in the following topic in the Network Manager Knowledge Center: https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/visualize/task/vis_addingnetworkviewstobookmark.html

Results The network views tree in the Network Health Dashboard displays the network views in your newly configured default bookmark.

Configuring the Unavailable Resources widget As a user of the Network Health Dashboard, you can configure which availability data is displayed in the Network Health Dashboard by the Unavailable Resources widget. For example, you can configure the widget to display availability data based on ping polls only, and not based on SNMP polls. You can also configure the time duration thresholds to apply to availability data displayed in this widget. For example, by default the widget charts the number of device and interface availability alerts that have been open for up to 10 minutes, more than 10 minutes, and more than one hour. You can change these thresholds.

About this task To configure which availability data is displayed by the Unavailable Resources widget, proceed as follows:

Procedure 1. Open the Network Health Dashboard.

2. In the Unavailable Resources widget, click User Preferences. 3. To configure the Unavailable Resources widget, use the following checkboxes and number steppers: Device Configure which device alerts to monitor in the Unavailable Resources widget in order to retrieve information on device availability. By default all of these boxes are checked. Device Ping Check the box to monitor Default Chassis Ping alerts. Selecting this option causes the Unavailable Resources widget to provide an indication of the number of open device ICMP (ping) polling alerts.

SNMP Poll Fail Check the box to monitor SNMP Poll Fail alerts. Selecting this option causes the Unavailable Resources widget to provide an indication of the number of open SNMP Poll Fail alerts. Interface Configure which interface alerts to monitor in the Unavailable Resources widget in order to retrieve information on interface availability. By default all of these boxes are checked. Interface Ping Check the box to monitor Default Interface Ping alerts. Selecting this option causes the Unavailable Resources widget to provide an indication of the number of open interface ICMP (ping) polling alerts. Link State Check the box to monitor SNMP Link State alerts. Selecting this option causes the Unavailable Resources widget to provide an indication of the number of open SNMP Link State alerts. Thresholds Upper Specify an upper threshold in hours and minutes. By default, the upper threshold is set to one hour. This threshold causes the chart in the Unavailable Resources widget to update as follows: when the amount of time that any availability alert in the selected network view remains open exceeds the one hour threshold, then the relevant bar in the Unavailable Resources chart updates to show this unavailability as a blue color-coded bar section. Lower Specify a lower threshold in hours and minutes. By default, the lower threshold is set to 10 minutes. This threshold causes the chart in the Unavailable Resources widget to update as follows: when the amount of time that any availability alert in the selected network view remains open exceeds the 10-minute threshold, then the relevant bar in the Unavailable Resources chart updates to show this unavailability as a pink color-coded bar section.

Configuring the Configuration and Event Timeline You can configure which event severity values to display on the Configuration and Event Timeline.

About this task To configure which event severity values to display on the Configuration and Event Timeline:

Procedure 1. Open the Network Health Dashboard.

2. In the Configuration and Event Timeline widget, click User Preferences. 3. To configure the Configuration and Event Timeline, use the following lists: Available Severities By default, lists all event severity values and these event severity values are all displayed in the Configuration and Event Timeline. To remove an item from this list, select the item and click the right-pointing arrow. You can select and move multiple values at the same time. Selected Severities By default, no event severity values are displayed in this list. Move items from the Available Severities list to this list to show just those values in the Configuration and Event Timeline. For example, to show only Critical and Major in the Configuration and Event Timeline, move the Critical and Major items from the Available Severities list to the Selected Severities list. To remove an item from this list, select the item and click the left-pointing arrow. You can select and move multiple values at the same time.

Administering the Network Health Dashboard Perform these tasks to configure and maintain the Network Health Dashboard for users.

Before you begin Device configuration change data can only be displayed in the Configuration and Event Timeline if the integration with Netcool Configuration Manager has been set up. For more information on the integration with Netcool Configuration Manager, see the following topic in the Network Manager Knowledge Center: https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/install/task/con_configintegrationwithncm.html

About this task Note: The minimum screen resolution for display of the Network Health Dashboard is 1536 x 864. If your screen is less than this minimum resolution, then you will see scroll bars on one or more of the widgets in the Network Health Dashboard.

Configuring the Network Health Dashboard As an administrator, you can configure how data is displayed, and which data is displayed in the Network Health Dashboard.

About this task To fit the quantity of widgets onto a single screen, customers need a minimum resolution of 1536 x 864, or higher. As an administrator, you can configure the Network Health Dashboard in a number of ways to meet the needs of your operators. Changing the layout of the dashboard You can change the layout of the dashboard. For example, you can reposition or resize widgets. See the information about Editing dashboard content and layout on the Jazz for Service Management Knowledge Center: https://www.ibm.com/support/knowledgecenter/SSEKCU Change the refresh period for all widgets on the Network Health Dashboard The Network Manager widgets within the Network Health Dashboard update by default every 20 seconds. You can change this update frequency by performing the following steps. Note: The Event Viewer widget updates every 60 seconds by default. 1. Edit the following configuration file: $NMGUI_HOME/profile/etc/tnm/nethealth.properties. 2. Find the following line and update the refresh period to the desired value in seconds.

nethealth.refresh.period=60

3. Save the file.
4. Close and reopen the Network Health Dashboard tab to put the changes into effect.
Change the colors associated with event severity values used in the Configuration and Event Timeline
You can update the colors associated with event severity values used in the Configuration and Event Timeline by performing the following steps:
1. Edit the following configuration file: $NMGUI_HOME/profile/etc/tnm/status.properties.
2. Find the properties named status.color.background.severity_number, where severity_number corresponds to the severity number. For example, 5 corresponds to Critical severity.
3. Change the RGB values for the severity values, as desired.
4. Save the file.
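For example, a hypothetical entry to set the background color for Critical (severity 5) events to red might look like the following line. The property name follows the pattern described above, but the RGB value shown here is illustrative, not a shipped default; check the existing entries in your status.properties file for the exact value format before editing.

status.color.background.5=255,0,0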

Disable launch of the Network View tab when selecting a network view in the Network Health Dashboard
When a user selects a network view in the Network Health Dashboard, by default a second tab, called "Network View", is opened. This tab contains a dashboard comprising the Network Views GUI, the Event Viewer, and the Structure Browser, and displays the selected network view. If your network views are very large, displaying this second tab can have an impact on system performance. To avoid this performance impact, you can disable the launch of the Network View tab by performing the following steps:
1. Edit the following configuration file: $NMGUI_HOME/profile/etc/tnm/topoviz.properties.
2. Find the following lines:

# Defines whether the dashboard network view tree fires a launchPage event when the user clicks a view in the tree
topoviz.networkview.dashboardTree.launchpage.enabled=true

3. Set the property topoviz.networkview.dashboardTree.launchpage.enabled to false.
4. Save the file.
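After the edit, the line in topoviz.properties reads:

topoviz.networkview.dashboardTree.launchpage.enabled=false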

Troubleshooting the Network Health Dashboard
Use this information to troubleshoot the Network Health Dashboard.

Network Health Dashboard log files
Review the Network Health Dashboard log files to support troubleshooting activity. The Network Health Dashboard log files can be found at the following locations:

Table 77. Locations of Network Health Dashboard log files

File         Location
Log file     $NMGUI_HOME/profile/logs/tnm/ncp_nethealth.0.log
Trace file   $NMGUI_HOME/profile/logs/tnm/ncp_nethealth.0.trace
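To watch the log while reproducing a problem, you can, for example, follow the log file from a shell (this assumes a standard installation and that $NMGUI_HOME is set in your environment):

tail -f $NMGUI_HOME/profile/logs/tnm/ncp_nethealth.0.log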

Data sources for the Network Health Dashboard widgets
Use this information to understand where the Network Health Dashboard widgets retrieve data from. This information might be useful for troubleshooting data presentation issues in the Network Health Dashboard.

Configuration and Event Timeline widget
This widget is populated by the following integrations:
• Tivoli Netcool/OMNIbus integration that analyzes Tivoli Netcool/OMNIbus events and shows a count based on event severity in a specified period.
• Netcool Configuration Manager integration that retrieves configuration change distribution.

Percentage Availability widget
The data source for this widget is the historical poll data table pdEwmaForDay. The widget displays data from the device poll PingResult from the pdEwmaForDay table, scoped as follows:
• Scope is the selected network view if called from the Network Health Dashboard.
• Scope is the in-context devices or interfaces if called from a right-click command within a topology map.
Note: The widget is updated only at the end of the hour to which the data applies.

Top Performers widget
The data sources for this widget are the various historical poll data tables:
• pdEwmaForDay

• pdEwmaForWeek
• pdEwmaForMonth
• pdEwmaForYear
The scope of the data is as follows:
• Scope is the selected network view if called from the Network Health Dashboard.
• Scope is the in-context devices or interfaces if called from a right-click command within a topology map.

Unavailable Resources widget
This widget is populated by a Tivoli Netcool/OMNIbus integration that analyzes Tivoli Netcool/OMNIbus events and uses the event data to determine whether a device or interface is affected, and whether the issue is ICMP or SNMP-based.

Investigating data display issues in the Network Health Dashboard
If any of the widgets in the Network Health Dashboard are not displaying data, either there is no data to display, or there is an underlying problem that needs to be resolved. As an administrator, you can configure poll policies and poll definitions so that users are able to display the data that they need to see in the Network Health Dashboard. You can also explore other potential underlying issues, such as problems with the underlying systems that process and store historical poll data.

About this task
To configure poll policies, see the following topic in the Network Manager Knowledge Center: https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/poll/task/poll_creatingpollswithmultiplepolldefinitions.html
When editing a poll policy, the following options in the Poll Policy Editor are important for determining whether data from the poll policy will be available for display in the Network Health Dashboard:
Poll Enabled
Check this option to enable the poll policy.
Store?
Check this option to store historical data for the poll policy.
Note: Checking this option activates the historical poll data storage system, which stores large amounts of data on your system. For more information on the historical poll data storage system, see the following topic in the Network Manager Knowledge Center: https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/poll/task/poll_administeringstorm.html
Network Views
In this tab, ensure that the poll policy is active in the network views that you want to monitor in the Network Health Dashboard.
Configure poll policies as follows in order to make data available in the various widgets of the Network Health Dashboard. For information on the different Network Manager poll policies and poll definitions, see the following topic in the Network Manager Knowledge Center: https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/ref/reference/ref_pollingref.html
Unavailable Resources widget
Configure the following poll policies in order to make data available in the Unavailable Resources widget:

Table 78. Unavailable Resources widget: which poll policies to configure

To show response data for chassis devices based on ping polling, enable any poll policy that uses one or more chassis ping poll definitions in the appropriate network views; an example is the Default Chassis Ping poll policy.
To show response data for interfaces based on ping polling, enable any poll policy that uses one or more interface ping poll definitions in the appropriate network views; an example is the Default Interface Ping poll policy.
To show response data for chassis devices based on SNMP polling, enable any poll policy that uses one or more SNMP poll definitions in the appropriate network views; an example is the snmpInBandwidth poll policy.
To show response data for interfaces based on SNMP polling, enable the SNMP Link State poll policy in the appropriate network views.

Percentage Availability widget
Enable the Default Chassis Ping poll policy in order to display overall chassis availability data in the Percentage Availability widget.
Top Performers widget
Metrics in the Top Performers widget
To show a specific metric in the Top Performers widget Metric drop-down list, you must enable the poll policy that contains a poll definition related to that metric. Alternatively, create a new poll definition and add it to an enabled poll policy.
Note: These must be poll definitions that can be stored and that fall into one of the following types:
• Basic threshold
• Ping
• SnmpLinkState
For example, using the default poll policies and poll definitions provided with Network Manager, here are examples of poll policies to enable and the corresponding metric that will be made available in the Metric drop-down list:

Table 79. Top Performers widget: examples of poll policies to configure

To display this metric    Enable this poll policy    Which contains this poll definition
ifInDiscards              ifInDiscards               ifInDiscards
ifOutDiscards             ifOutDiscards              ifOutDiscards
snmpInBandwidth           snmpInBandwidth            snmpInBandwidth

Historical poll data in the Top Performers widget
You can display historical poll data for a metric in the Top Performers widget by clicking the Last Day, Last Week, Last Month, and Last Year buttons. To collect historical poll data to display in this way, you must select the option to store historical data for the poll definition related to the metric. For example, using the default poll policies and poll definitions provided with Network Manager, here are examples of poll definitions to configure.

Note: Historical data is only viewable in the Top Performers widget once it has been collected, processed, and stored in the NCPOLLDATA database. For example, if you selected the option to store poll data for a poll definition one month ago, then you will only see one month's worth of data in the Last Year view.

Table 80. Top Performers widget: examples of poll definitions to configure

To display historical data for this metric    Select the Store? option for this poll definition    Within this poll policy
ifInDiscards                                  ifInDiscards                                         ifInDiscards
ifOutDiscards                                 ifOutDiscards                                        ifOutDiscards
snmpInBandwidth                               snmpInBandwidth                                      snmpInBandwidth

Important: If you have correctly configured storage of historical poll data for the metrics you are interested in, but you do not see any data when you click any of the Last Day, Last Week, Last Month, and Last Year buttons, then there might be a problem with the underlying systems that process and store historical poll data. In particular, the Apache Storm system that processes historical poll data might not be running, or Apache Storm might have lost its connection to the NCPOLLDATA database, where historical poll data is stored. For more information, see the following topic in the Network Manager Knowledge Center: https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/poll/task/poll_administeringstorm.html
Configuration and Event Timeline
If the integration with Netcool Configuration Manager was set up at installation time, but configuration change data does not appear in the Configuration and Event Timeline, this might be due to integration issues. For more information on the integration with Netcool Configuration Manager, see the following topic in the Network Manager Knowledge Center: https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/install/task/con_configintegrationwithncm.html

Top Performers widget is unable to display values greater than 32 bits
The Top Performers widget is unable to display values greater than 32 bits. If no data is being displayed in the Top Performers widget for a selected metric, this might be due to a number of factors. One possibility is that the value of data in that metric is greater than can be represented in 32 bits. If no data is being displayed in the Top Performers widget for a selected metric, run the following SQL query to determine whether there is an error, and if so, what the error code is.

SELECT errorcode, value, datalabel
FROM ncpolldata.polldata pd
INNER JOIN ncpolldata.monitoredobject mo
    ON mo.monitoredobjectid = pd.monitoredobjectid
WHERE datalabel = 'poll_of_interest'

In this query, replace poll_of_interest with the name of the poll definition of interest. The error code 112 indicates that the metric contains a polled value that was greater than can be stored in a 32-bit integer field.
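For example, to check the snmpInBandwidth metric for 32-bit overflow errors, you might run the following variant of the query; the additional errorcode filter is an illustrative refinement, and you should substitute your own poll definition name:

SELECT errorcode, value, datalabel
FROM ncpolldata.polldata pd
INNER JOIN ncpolldata.monitoredobject mo
    ON mo.monitoredobjectid = pd.monitoredobjectid
WHERE datalabel = 'snmpInBandwidth'
AND errorcode = 112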

Percentage Availability widget takes a long time to refresh
If the Percentage Availability widget is taking a long time to refresh, one possible solution is to increase the number of threads available to this widget. This solution is most suitable for customers with large networks.

About this task
Increase the number of threads available by performing the following steps:

Procedure
1. Edit the following configuration file: $NMGUI_HOME/profile/etc/tnm/nethealth.properties.

2. Find the following lines:

## Widget thread count for availability widget
nethealth.threads.availability=5

3. Increase the value of the nethealth.threads.availability property. The maximum possible value is 10.
4. Save the file.
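For example, to use the maximum number of threads, the property would read:

nethealth.threads.availability=10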

Developing custom dashboards
You can create pages that act as "dashboards" for displaying information on the status of parts of your network, or edit existing dashboards, such as the Network Health Dashboard. You can select from the widgets that are provided with Network Manager and Tivoli Netcool/OMNIbus Web GUI, and also from other products that are deployed in your Dashboard Application Services Hub environment.

About this task
For information on creating and editing pages in the Dashboard Application Services Hub, see the Jazz for Service Management information center at https://www.ibm.com/support/knowledgecenter/SSEKCU
Before you begin
• Determine which widgets you want on the page.
• If you want a custom Web GUI gauge on the page, develop the metric that will feed the gauge display.
• Decide which users, groups, or user roles you want to have access to the page and assign the roles accordingly.
• If you want the widgets to communicate in a custom wire, develop the wires that will control the communications between the widgets.
Related concepts
Network Management tasks
Use this information to understand the tasks that users can perform using Network Management.

Displaying an event-driven view of the network
You can configure a dashboard to contain a Dashboard Network Views widget and other widgets that are driven by network alert data. Under the configuration described here, when a user clicks a node in the network view tree in the Dashboard Network Views widget, the other widgets in the dashboard update to show data based on events for entities in the selected network view. This dashboard is useful for network operations centers that want to see the real-time situation on the network, as it provides a real-time view of network alerts and of the availability of devices and interfaces. No historical polling data is used by any of the widgets in this dashboard, so this provides an alternative to the Network Health Dashboard if polling data is not being stored.

Before you begin
Depending on the requirements you have of the page, perform some or all of the tasks described in “Developing custom dashboards” on page 522.

About this task
To develop a page that wires a Dashboard Network Views widget and other widgets that are driven by network alert data:

Procedure
1. Log in as a user that has the iscadmins role.
2. Create the page, assign it to a location in the navigation, and specify the roles that users need to view the page.

The default location is Default, and in this task it is assumed that this default location is used. If you use a different location, then substitute your chosen location wherever you see the location Default used in this task.
3. Add the Dashboard Network Views widget to the page.
4. Add the Configuration and Event Timeline to the page. If you have configured the Network Manager integration with Netcool Configuration Manager, then this widget displays a timeline of up to 24 hours, showing configuration change data and event data by first occurrence of the event. If you have not set up the integration, the widget still displays event data on the timeline.
5. Add the Unavailable Resources widget to the page. This widget displays how many devices and interfaces within the selected network view are unavailable.
6. Add the Event Viewer to the page.
7. Click Save and Exit to save the page.
Note: This page does not require any wires. The Dashboard Network Views widget automatically broadcasts the NodeClickedOn event, and the other widgets automatically subscribe to this event and update their data accordingly.

Results
You can now click a network view in the network view tree in the Dashboard Network Views widget and the other widgets automatically update to show event data:
• The Unavailable Resources widget displays a bar chart showing how many devices and interfaces within the selected network view are unavailable. The exact data shown in this widget depends on whether the following poll policies are enabled:
– Devices: the Default Chassis Ping and SNMP Poll Fail poll policies must be enabled.
– Interfaces: the Default Interface Ping and SNMP Link State poll policies must be enabled.
• The Configuration and Event Timeline displays a timeline showing events by first occurrence, and, if the Netcool Configuration Manager integration is configured, configuration change data for all entities in the network view.
• The Event Viewer shows events for all entities in the network view.
Note: Clicking a bar in the Unavailable Resources widget further filters the Event Viewer to show only the availability events related to the devices or interfaces in that bar.

Displaying and comparing top performer data for entities in a network view
Create a dashboard containing multiple Top Performers widgets to enable you to compare historical poll data across multiple entities and metrics in a selected network view. This dashboard is particularly useful for background investigation and analysis in order to determine how devices and interfaces are performing over time and whether there are any underlying issues.

Before you begin
Depending on the requirements you have of the page, perform some or all of the tasks described in “Developing custom dashboards” on page 522.

About this task
To develop a page that wires a Network Views widget and multiple Top Performers widgets to enable you to compare historical poll data across multiple entities and metrics in a selected network view:

Procedure
1. Log in as a user that has the iscadmins role.
2. Create the page, assign it to a location in the navigation, and specify the roles that users need to view the page.

The default location is Default, and in this task it is assumed that this default location is used. If you use a different location, then substitute your chosen location wherever you see the location Default used in this task.
3. Add the Network Views widget to the page.
4. Add two Top Performers widgets to the page.
Note: Adding two Top Performers widgets enables you to perform basic comparisons, such as displaying metric traces on the same device or interface over two different time periods. You can add more than two Top Performers widgets to perform comparisons across a wider range of data; for example, adding four Top Performers widgets enables you to display metric traces on the same device or interface over four different time periods.
5. Click Save and Exit to save the page.
Note: This page does not require any wires. The Network Views widget automatically broadcasts the NodeClickedOn event, and the other widgets automatically subscribe to this event and update their data accordingly.

What to do next
You can use this dashboard to compare metric traces or charts.

Example: comparing metric traces on the same device or interface over different time periods
Use the custom dashboard that contains the two Top Performers widgets to compare metric traces on the same device or interface over different time periods; for example, you might see a spike in the current raw data trace for a metric such as snmpInBandwidth on a specific interface. To determine if this is just an isolated spike or a more serious ongoing issue, you can, on the same dashboard, also display a trace for the same snmpInBandwidth metric on the same interface over a longer time period, such as the last day or last week, and then visually determine if there have been continual incidences of high snmpInBandwidth on this interface over the last day or week.

About this task
To use the dashboard to compare metric traces on the same device or interface over different time periods, proceed as follows:

Procedure
1. In the Network Views widget, select a network view. The two Top Performers widgets update to show data for the selected network view.
2. In each of the Top Performers widgets, click the Metric drop-down list and select a metric of interest; for example, snmpInBandwidth. Select the same metric on both Top Performers widgets; this ensures that the top entity in both charts is always the same.
3. In one of the Top Performers widgets, click the top bar to show the trace for the entity with the top value in that chart. This displays a time-based trace of current raw data for the snmpInBandwidth metric.
4. In the other Top Performers widget, click the top bar to show the trace for the entity with the top value in that chart. You are now showing the identical time trace in both widgets.
5. In the second Top Performers widget, change the timeframe; for example, click Last Day. You are now showing a current raw data trace of snmpInBandwidth data in the first widget and a trace of the last day's worth of snmpInBandwidth data for the same interface in the second widget, and you can compare the more transient raw data in the first widget with the data averages over the last day.

Example: comparing different metric traces on the same device or interface
Use the custom dashboard that contains the two Top Performers widgets to compare different metric traces on the same device or interface; for example, you might see a number of incidences of high snmpInBandwidth on a specific interface over the last day. To determine if the high incoming SNMP

bandwidth usage on this interface is affecting the outgoing SNMP bandwidth usage on that same interface, you can, on the same dashboard, also display a trace for the snmpOutBandwidth metric on the same interface and also over the last day, and then visually compare the two traces.

About this task
To use the dashboard to compare different metric traces on the same device or interface, proceed as follows:

Procedure
1. In the Network Views widget, select a network view.
2. In each of the Top Performers widgets, click the Metric drop-down list and select a metric of interest; for example, snmpInBandwidth. Select the same metric on both Top Performers widgets; this ensures that the top entity in both charts is always the same.
3. In one of the Top Performers widgets, click the top bar to show the trace for the entity with the top value in that chart. This displays a time-based trace of current raw data for the snmpInBandwidth metric.
4. In the other Top Performers widget, click the top bar to show the trace for the entity with the top value in that chart. You are now showing the identical time trace in both widgets.
5. In the second Top Performers widget, click the Metric drop-down list and select snmpOutBandwidth. You are now displaying a trace of incoming SNMP bandwidth usage on the interface with the highest incoming SNMP bandwidth usage in the network view in one widget, and a trace of outgoing SNMP bandwidth usage on that same interface in the other. You can now visually compare the two traces to see if there is any correlation.

Example: comparing different Top 10 metric charts
Use the custom dashboard that contains the two Top Performers widgets to compare different Top 10 metric charts. This enables you to see the potential impact of one metric on another across the devices that are showing the highest performance degradation on the first metric. For example, you might want to compare the chart showing the devices with the highest ten incoming SNMP bandwidth usage values with the chart showing the devices with the highest ten outgoing SNMP bandwidth usage values.

About this task
To use the dashboard to compare different Top 10 metric charts, proceed as follows:

Procedure
1. In the Network Views widget, select a network view.
2. In one of the Top Performers widgets, click the Metric drop-down list and select a metric of interest; for example, snmpInBandwidth. The Top Performers widget updates to show a bar chart of the ten interfaces in the network view with the highest incoming SNMP bandwidth usage values.
3. In the other Top Performers widget, click the Metric drop-down list and select a second metric of interest; for example, snmpOutBandwidth. The Top Performers widget updates to show a bar chart of the ten interfaces in the network view with the highest outgoing SNMP bandwidth usage values.

Results
You can now compare the two charts to see if there is any correlation between the data.

Displaying network view event data in a gauge group
You can use wires to configure a Network Views widget and a Dashboard Application Services Hub gauge group to pass data between each other. Under the configuration described here, when a user clicks a node in the network view tree in the Network Views widget, the gauge group updates to show a number of status gauges; you can configure as many status gauges as desired. In the example described here, three gauges are configured: Severity 3 (minor), Severity 4 (major), and Severity 5 (critical), together with a number within each status gauge indicating how many events at that severity are currently present on the devices in that network view. The instructions in this topic describe a possible option for wiring the two widgets.

Before you begin
Depending on the requirements you have of the page, perform some or all of the tasks described in “Developing custom dashboards” on page 522.

About this task
To develop a page that wires a Network Views widget and a Dashboard Application Services Hub gauge group:

Procedure
1. Log in as a user that has the iscadmins role.
2. Create the page, assign it to a location in the navigation, and specify the roles that users need to view the page. The default location is Default, and in this task it is assumed that this default location is used. If you use a different location, then substitute your chosen location wherever you see the location Default used in this task.
3. Add the Network Views widget to the page.
4. Add the Dashboard Application Services Hub gauge group to the page.
5. Edit the gauge group widget.
6. Select a dataset for the gauge group. In the Gauge Group: Select a Dataset window, search for the Netcool/OMNIbus WebGUI > All data > Filter Summary dataset. One way to do this is as follows:
a) In the search text box at the top left of the Gauge Group: Select a Dataset window, type filter.
b) Click Search. This search retrieves two Filter Summary datasets.
c) Select the dataset that has a provider title labeled Provider: Netcool/OMNIbus WebGUI > Datasource: All data.
7. Configure how you want the gauge group to be displayed. In the Gauge Group: Visualization Settings window, add three value status gauges by performing the following steps:
a) Click Choose Widget and select ValueStatus Gauge from the drop-down list. Then click Add.
b) Add two more ValueStatus Gauge widgets, following the instruction in the previous step. There should now be three ValueStatus Gauge widgets listed in the Selected Widgets list.
8. Configure the three value status gauges to show the following:
• The first value status gauge will display the number of Severity 3 (minor) events, within the Severity 3 (minor) symbol.
• The second value status gauge will display the number of Severity 4 (major) events, within the Severity 4 (major) symbol.
• The third value status gauge will display the number of Severity 5 (critical) events, within the Severity 5 (critical) symbol.
Perform the following steps to configure the Severity 3 (minor) value status gauge:
a) Select the first value status gauge item in the Selected Widgets list.

b) Click Required Settings.
c) Click the Value drop-down list and select Severity 3 Event Count from the drop-down list.
d) Click Optional Settings.
e) Click the Label above Gauge drop-down list and select Severity 3 Event Count Name from the drop-down list.
f) In the Minor spinner, set a threshold value of 0 by typing 0. This threshold value causes any number of Severity 3 (minor) events to generate a Severity 3 value status gauge.
Perform the following steps to configure the Severity 4 (major) value status gauge:
a) Select the second value status gauge item in the Selected Widgets list.
b) Click Required Settings.
c) Click the Value drop-down list and select Severity 4 Event Count from the drop-down list.
d) Click Optional Settings.
e) Click the Label above Gauge drop-down list and select Severity 4 Event Count Name from the drop-down list.
f) In the Major spinner, set a threshold value of 0 by typing 0. This threshold value causes any number of Severity 4 (major) events to generate a Severity 4 value status gauge.
Perform the following steps to configure the Severity 5 (critical) value status gauge:
a) Select the third value status gauge item in the Selected Widgets list.
b) Click Required Settings.
c) Click the Value drop-down list and select Severity 5 Event Count from the drop-down list.
d) Click Optional Settings.
e) Click the Label above Gauge drop-down list and select Severity 5 Event Count Name from the drop-down list.
f) In the Critical spinner, set a threshold value of 0 by typing 0. This threshold value causes any number of Severity 5 (critical) events to generate a Severity 5 value status gauge.
9. Click Save and Exit to save the page.
10. From the page action list, select Edit Page.

11. Click Show Wires and then, in the Summary of wires section of the window, click New Wire.
12. Specify the wires that connect the Network Views widget to the Dashboard Application Services Hub gauge group.
• In the Select Source Event for New Wire window, click Network Views > NodeClickedOn, and then click OK.
• In the Select Target for New Wire window, click Default > This page name_of_page > Event Viewer, where name_of_page is the name of the page that you created in step 2.
• In the Transformation window, select Show Gauge Events, and then click OK.
13. Close the Summary of wires section of the window by clicking the X symbol at the top right corner.
14. Click Save and Exit to save the page.

Results
You can now click a network view in the network view tree in the Network Views widget and have the gauge group update to show three status values: Severity 3 (minor), Severity 4 (major), and Severity 5 (critical), together with a number within each status gauge indicating how many events at that severity are currently present on the devices in the selected network view.

Event information for Network Health Dashboard widgets
Refer to this table to get information about the publish events and subscribe events for Network Health Dashboard widgets. Use this event information when you create a new custom widget and you want to wire your custom widget with an existing Network Health Dashboard widget.

Table 81. Event information for Network Health Dashboard widgets

Widget name: Configuration and Event Timeline
Event type: Subscribe event
Event name: NodeClickedOn
Event description: Subscribes to a NodeClickedOn event and displays data based on the event's ViewId and the datasource.

Widget name: Percentage Availability
Event type: Subscribe event
Event name: NodeClickedOn
Event description: Subscribes to a NodeClickedOn event and displays data based on the event's ViewId and the datasource.

Widget name: Network Manager Polling Chart
Event type: Subscribe event
Event name: NodeClickedOn
Event description: Subscribes to a NodeClickedOn event and displays data based on the event's ViewId and the datasource.

Widget name: Unavailable Resources
Event type: Publish event
Event name: showEvents
Event description: Click a bar in the displayed graph and the widget publishes a showEvents event that contains the name of the transient filter.

Widget name: Unavailable Resources
Event type: Subscribe event
Event name: NodeClickedOn
Event description: Subscribes to a NodeClickedOn event and displays data based on the event's ViewId and the datasource.

Device Dashboard
Use the Device Dashboard to troubleshoot network issues by navigating the network topology and seeing performance anomalies and trends on any device, link, or interface.
The content of the Device Dashboard varies depending on whether you installed Network Performance Insight.
• If Network Performance Insight is installed, the Device Dashboard includes the Performance Insights widget, which shows performance measure values, anomalies, and trends for the selected entity. For more information, see “Monitoring performance data” on page 530.
• If Network Performance Insight is not installed, the Device Dashboard does not include the Performance Insights widget. Instead, it displays the Top Performers widget, showing performance measure values only for the selected entity. For more information, see “Displaying highest and lowest performers in a network view” on page 510.
Related tasks
Installing the Device Dashboard

Install the Device Dashboard to view event and performance data for a selected device and its interfaces on a single dashboard.

Troubleshooting network issues using the Device Dashboard
Use the Device Dashboard to troubleshoot any network issue on a device, link, or interface.

Starting the Device Dashboard
You can start the Device Dashboard from an event in the Event Viewer, or from a device in the Network Views, Network Hop View, or Path Views. You can also start the Device Dashboard from the Top Performers widget within the Network Health Dashboard.

About this task
Open the Device Dashboard using one of the following options:
• Open the Device Dashboard from the Top Performers widget in the Network Health Dashboard by completing one of the following tasks:
– Press Ctrl and click a bar in the Chart view.
– Click an entity name in the Table view.
The Device Dashboard opens in a new Dashboard Application Services Hub tab, in the context of the device or interface that is associated with the element that you clicked.
• Open the Device Dashboard from any of the topology GUIs. Right-click a device, link, or interface and click Performance Insight > Show Device Dashboard. The topology GUIs include the Network Hop View, Network Views, Path Views GUI, and Structure Browser. The Device Dashboard opens in a new tab, in the context of the device, link, or interface associated with the element that you clicked.
• To open the Device Dashboard from the Event Viewer, right-click an event and click Performance Insight > Show Device Dashboard. The Device Dashboard opens in a new tab, in the context of the device or interface that is associated with the element that you clicked.

Changing Device Dashboard focus
You can change the Device Dashboard focus to a different device or link.

Procedure
1. To switch focus to a device, click the device in the Device tab in the Topology widget. If the device is not shown in the Topology widget, complete the following steps:
a) Search for the device by using the Network Hop View search feature. For more information, see https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/visualize/task/vis_searchfordevices.html. The Topology widget is updated to center on the device that is selected in the search results.
b) In the Topology widget, click the device of interest.
2. To switch focus to a link, click the link in the Device tab in the Topology widget. If the link is not shown in the Topology widget, complete the following steps:
a) Search for a device at one end of the link by using the Network Hop View search feature. For more information, see https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/visualize/task/vis_searchfordevices.html. The Topology widget is updated to center on the device that is selected in the search results.
b) In the Topology widget, click the link of interest.

Results
All the widgets on the Device Dashboard update to show data for the device, interface, or link.

Monitoring performance data
You can monitor performance metrics for a device, link, or interface using the Performance Insights widget within the Device Dashboard.

Procedure
1. Launch the Device Dashboard as described in “Starting the Device Dashboard” on page 529.
2. Ensure that the device, link, or interface of interest is selected. To change the dashboard context, see “Changing Device Dashboard focus” on page 529.
3. In the Performance Insights widget, select from the following controls to display performance metric values, anomalies, and trends.
Note: Performance anomalies are determined based on the application of static thresholds to performance data. Anomaly detection is an early warning system that indicates that a device, interface, or link needs attention.
Severity
This takes the form of a square in a color that indicates the anomaly severity for this performance metric.
• Red: indicates a higher severity anomaly.
• Orange: indicates a lower severity anomaly.
• Green: no anomaly.
Links
For each link, displays the NCIM topology database entity identifiers of the devices at either end of the link. Roll over this field for more details of the devices, such as hostname and IP address.
Connections
Within each link, lists the connections that make up the link. Each connection is identified using the NCIM topology database entity identifiers of the interfaces at either end of the link. Roll over this field for more details of the interfaces, such as interface name and interface description.
Devices
This tab contains performance data for the devices that you selected in the Topology portlet. If you selected interfaces, it shows the devices that contain them. If you selected links, it shows the devices at either end. If there are performance anomalies associated with any of the metrics, then the number of the worst severity metrics is shown in a colored square in the tab header.
• A number in a red square indicates the number of higher severity metrics. If a red square is showing, there might also be lower severity metrics.
• A number in an orange square indicates the number of lower severity metrics, with no higher severity metrics present.
Filter
Type any string to filter the rows displayed in the table.
Device
Lists the device or devices selected in the Topology widget. Expand the device node to see the metrics associated with the device.
Metric
Lists the performance metrics available for the associated device.

Last 30 Minutes
Displays a sparkline showing the trend of the performance metric over the last 30 minutes. The current value is shown using a colored dot at the end of the sparkline. The color of the dot indicates whether there is an anomaly associated with this performance metric.
• Red: indicates a higher severity anomaly.
• Orange: indicates a lower severity anomaly.
• Black: no anomaly.
Severity
Bullet graph, showing a central bar in a color that indicates the anomaly severity. The thresholds that have been applied to the performance metric are shown in different levels of gray shading in the wider bar. The color of the central bar specifies the anomaly severity.
• Red: indicates a higher severity anomaly.
• Orange: indicates a lower severity anomaly.
• Black: no anomaly.
Value
Indicates the value of the performance metric based on the most recent poll action.
Links
This tab contains performance metric anomaly and trend data for one or more links. In particular, it displays data for each connection on each of the selected links.
Note: The Links tab only appears if you selected one or more links in the Topology widget.
If there are performance anomalies associated with any of the metrics, then the number of the worst severity metrics is shown in a colored square in the tab header.
• A number in a red square indicates the number of higher severity metrics. If a red square is showing, there might also be lower severity metrics.
• A number in an orange square indicates the number of lower severity metrics, with no higher severity metrics present.
Filter
Type any string to filter the rows displayed in the table.
Links
For each link, displays the NCIM topology database entity identifiers of the devices at either end of the link. Roll over this field for more details of the devices, such as hostname and IP address.
Connections
Within each link, lists the connections that make up the link. Each connection is identified using the NCIM topology database entity identifiers of the interfaces at either end of the link. Roll over this field for more details of the interfaces, such as interface name and interface description.
Metric
Lists the performance metrics available for the associated connection.
Last 30 Minutes
Displays a sparkline showing the trend of the performance metric over the last 30 minutes. The current value is shown using a colored dot at the end of the sparkline. The color of the dot indicates whether there is an anomaly associated with this performance metric.
• Red: indicates a higher severity anomaly.
• Orange: indicates a lower severity anomaly.
• Black: no anomaly.

Severity
Bullet graph, showing a central bar in a color that indicates the anomaly severity. The thresholds that have been applied to the performance metric are shown in different levels of gray shading in the wider bar. The color of the central bar specifies the anomaly severity.
• Red: indicates a higher severity anomaly.
• Orange: indicates a lower severity anomaly.
• Black: no anomaly.
Value
Indicates the value of the performance metric based on the most recent poll action.
Interfaces
Depending on the selection in the Topology widget, this tab contains performance data for the following interfaces:

If you selected one or more devices in the Topology widget, this tab displays performance metric anomaly and trend data for all the interfaces on these devices. For more information, see Interfaces tab: Content when a device or interface is selected in the Topology widget.
If you selected one or more interfaces in the Topology widget, this tab displays performance metric anomaly and trend data for the selected interfaces. For more information, see Interfaces tab: Content when a device or interface is selected in the Topology widget.
If you selected one or more links in the Topology widget, this tab displays performance metric anomaly and trend data for the interfaces at either end of the connections that make up these links. For more information, see Interfaces tab: Content when a link is selected in the Topology widget.

If there are performance anomalies associated with any of the metrics, then the number of the worst severity metrics is shown in a colored square in the tab header.
• A number in a red square indicates the number of higher severity metrics. If a red square is showing, there might also be lower severity metrics.
• A number in an orange square indicates the number of lower severity metrics, with no higher severity metrics present.
If there is performance data associated with the interface, you can view it by opening the Traffic Details widget in a new tab. Right-click an interface and select Show Traffic Details.
Interfaces tab: Content when a device or interface is selected in the Topology widget
Filter
Type any string to filter the rows displayed in the table.
Metric
Drop-down list that lists all of the metrics available on the selected interfaces.
Device
Lists the device or devices that contain the interfaces selected in the Topology widget. Expand the device node to see all the interfaces in that device and associated status and metric data.
Interface
Interface name of the selected interfaces or of the interfaces on the selected devices.
Metric
Performance metric selected from the Metric drop-down list.
Last 30 Minutes
Displays a sparkline showing the trend of the performance metric over the last 30 minutes. The current value is shown using a colored dot at the end of the sparkline. The color of the dot indicates whether there is an anomaly associated with this performance metric.
• Red: indicates a higher severity anomaly.

• Orange: indicates a lower severity anomaly.
• Black: no anomaly.
Severity
Bullet graph, showing a central bar in a color that indicates the anomaly severity. The thresholds that have been applied to the performance metric are shown in different levels of gray shading in the wider bar. The color of the central bar specifies the anomaly severity.
• Red: indicates a higher severity anomaly.
• Orange: indicates a lower severity anomaly.
• Black: no anomaly.
Value
Indicates the value of the performance metric based on the most recent poll action.
Interfaces tab: Content when a link is selected in the Topology widget
Filter
Type any string to filter the rows displayed in the table.
Links
For each link, displays the NCIM topology database entity identifiers of the devices at either end of the link. Roll over this field for more details of the devices, such as hostname and IP address.
Connections
Within each link, lists the connections that make up the link. Each connection is identified using the NCIM topology database entity identifiers of the interfaces at either end of the link. Roll over this field for more details of the interfaces, such as interface name and interface description.
Device
Specifies the device at each end of the connection.
Interface
Specifies the interface at each end of the connection.
Metric
Performance metric selected from the Metric drop-down list.
Last 30 Minutes
Displays a sparkline showing the trend of the performance metric over the last 30 minutes. The current value is shown using a colored dot at the end of the sparkline. The color of the dot indicates whether there is an anomaly associated with this performance metric.
• Red: indicates a higher severity anomaly.
• Orange: indicates a lower severity anomaly.
• Black: no anomaly.
Severity
Bullet graph, showing a central bar in a color that indicates the anomaly severity. The thresholds that have been applied to the performance metric are shown in different levels of gray shading in the wider bar. The color of the central bar specifies the anomaly severity.
• Red: indicates a higher severity anomaly.
• Orange: indicates a lower severity anomaly.
• Black: no anomaly.
Value
Indicates the value of the performance metric based on the most recent poll action.

Displaying performance timelines
You can view performance metrics for a device, link, or interface in the Performance Timeline portlet in the Device Dashboard.

About this task
To display performance timelines, complete the following steps.

Procedure
1. Start the Device Dashboard as described in “Starting the Device Dashboard” on page 529.
2. Ensure that the device, link, or interface of interest is selected. To change the dashboard context, see “Changing Device Dashboard focus” on page 529.
3. In the Performance Insights portlet, click the sparkline that is associated with the metric for which you want to display a timeline. You might need to change to a different tab, for example, to the Interfaces tab, to find the metric that you want. The Performance Timeline portlet updates and displays a timeline of the performance metric that you clicked. The last 30 minutes of data are displayed on the upper timeline. By default, the last 12 hours of data are displayed on the lower timeline, which is called the viewfinder. You can configure the amount of data that is displayed in the viewfinder by updating the portlet configuration settings. The window in the viewfinder indicates which section of the longer timeline is being displayed in the upper timeline.
4. Optional: Investigate the performance data by performing the following actions:
• To display a different time window, click in the viewfinder. The timeline updates to display the time period that you clicked.
• To display a larger or smaller time window, drag the handles of the viewfinder window inwards or outwards. By default, the time period is 30 minutes.
• To view extra information about a particular point, hover over the point in the timeline.

Displaying traffic data on an interface
You can display data about the traffic that is flowing through an interface by opening the Traffic Details portlet from an interface in the Performance Insights portlet.

About this task
To display traffic data for an interface, complete the following steps.

Procedure
1. Start the Device Dashboard as described in “Starting the Device Dashboard” on page 529.
2. Click the device of interest in the Device tab in the Topology portlet.
3. Click the Interfaces tab in the Performance Insights portlet.
4. Right-click an interface of interest and select Show Traffic Details.

Results
The Traffic Details portlet in Network Performance Insight opens, and shows traffic information for the selected interface.

Configuring the Performance Insights widget
Both administrators and end users can configure the Performance Insights widget within the Device Dashboard.

About this task
To configure the Performance Insights widget, perform the following steps.

Procedure

1. In the Performance Insights widget, click Edit Preferences.
2. To configure communication between the Performance Insights widget and Network Performance Insight, use the following controls:
NPI Hostname
Specify the hostname of the Network Performance Insight server that provides performance data for this widget.
Important: You must only ever change this setting based on instructions from a system administrator.
Port
Specify the port on the Network Performance Insight host. This is usually 9443.
3. To configure presentation and refresh of data on the Performance Insights widget, use the following controls:
Time Period
Specify how many minutes of sparkline data to display for each performance metric in the Performance Insights widget. The default value is 30 minutes.
Refresh Rate
Specify how often to refresh the data in the Performance Insights widget. The default value is 60 seconds.
4. Click OK.
5. For the changes to take effect, close and then reopen the Device Dashboard.

Configuring thresholds
You can configure thresholds that determine when anomalies or events are raised.

About this task
You can configure traffic flow thresholds, which raise Tivoli Netcool/OMNIbus events when the thresholds are breached. Traffic flow thresholds use data from Network Performance Insight. You can also configure anomaly thresholds for performance measures. Anomaly thresholds use data from Network Manager.

Defining traffic flow thresholds
You can view details about the traffic flowing across an interface in the Traffic Details portlet. This feature is available only for interfaces on which traffic flow is enabled.

About this task
You can view details about the traffic flowing across an interface by opening the Traffic Details portlet from the Interfaces tab in the Performance Insights portlet. Right-click an interface that has traffic flow enabled and select Show Traffic Details.
To view the flow data, you must first enable one or more devices to capture flow data, and configure one or more interfaces on those devices to capture flow data. Optionally, configure traffic flow thresholds to raise Netcool/OMNIbus events when the thresholds are breached.

Flow protocols vary between router providers. For example, Cisco uses the NetFlow protocol. Flow is usually enabled on key interfaces, to avoid using large amounts of resources. Flow data is stored by Network Performance Insight. The Traffic Details portlet is a Network Performance Insight portlet.
To enable traffic flow on interfaces, complete the following steps.

Procedure
1. Enable the collection of flow data on the devices of interest, using the appropriate protocol. A device-side configuration sketch follows this list. For more information, see the following topic in the Network Performance Insight Knowledge Center:
• Network Performance Insight: Configuring flow devices
2. Configure the collection of flow data on the interfaces of interest. For more information, see the following topic in the Network Performance Insight Knowledge Center:
• Network Performance Insight: Configuring flow interfaces
3. Optional: Configure a traffic flow threshold on one or more interfaces. For more information, see the following topic in the Network Performance Insight Knowledge Center:
• Network Performance Insight: Configuring flow thresholds
Note: It is not necessary to set a flow threshold in order to view flow data.
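As an illustration of step 1, traditional NetFlow export on a Cisco IOS router might be enabled as follows. This is a minimal sketch only: the exact commands vary by platform and IOS version, the collector address (192.0.2.10) and port (2055) are placeholders for your Network Performance Insight flow collector, and the interface name is an example; consult your device documentation and the flow configuration topics referenced above for the supported settings.

! Hypothetical example: export NetFlow version 9 records to a collector
ip flow-export version 9
ip flow-export destination 192.0.2.10 2055
!
! Enable flow capture on a key interface
interface GigabitEthernet0/1
 ip flow ingress
 ip flow egress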

Defining performance thresholds for anomaly detection
If your Netcool Operations Insight solution is integrated with Network Performance Insight, then you can define static thresholds for anomaly detection. You define a static threshold for a given KPI within the poll definition that polls for that KPI.

About this task
For full information on how to configure poll definitions, see the following topic in the Network Manager Knowledge Center: https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/poll/task/crtpolldef.html

About poll definitions and static thresholds
You specify static thresholds on performance measures on devices and interfaces by defining the thresholds within the relevant poll definition. If these static thresholds are violated for any given performance measure on a device or interface, Netcool/OMNIbus events are generated at an appropriate severity level, and if the context of the Device Dashboard is switched to the relevant device or interface, the anomaly value and any associated performance measure trends are shown on the dashboard.
Related tasks
Network Manager V4.2 documentation: Poll definition parameters
Use this information to understand the parameters of a Network Manager poll definition.
Network Manager V4.2 documentation: Threshold polling
During threshold polling, predefined formulas are applied to selected MIB variables, and if the threshold is exceeded by the MIB variable, then an event is generated. A Clear event is generated when the value of the MIB variable either falls below the threshold value, or falls below a different clear value.

Anomaly detection
By defining static thresholds for a performance measure in the Network Manager Poll Definition Editor, you can optionally specify definitive threshold values. If these threshold values are violated for the specified number of consecutive occurrences, then a performance anomaly with the appropriate severity level is generated.

Defining anomaly thresholds
Define anomaly thresholds to detect anomalies in performance measures on devices, links, and interfaces, and display these anomalies in the Performance Insights widget.

Before you begin
Anomaly thresholds are defined within poll definitions.
Restriction: You can define anomaly thresholds only for Basic threshold, Chassis Ping, and Interface Ping poll definitions. The poll definition must be set to store historical data and specified within an active poll policy.

About this task
To define an anomaly threshold, complete the following steps:

Procedure
1. Create a poll definition, or modify an existing poll definition.
• To apply anomaly thresholds to performance measures such as CPU and memory utilization on devices, and incoming and outgoing bandwidth utilization on interfaces, create or modify a basic threshold poll definition. Creating a basic threshold poll definition is described in https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/poll/task/poll_crtbasictholddef.html.
• To apply anomaly thresholds to performance measures associated with ping operations against devices and interfaces, such as packet loss and ping time, create or modify a chassis or interface ping poll definition. Creating a chassis or interface ping poll definition is described in https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/poll/task/poll_creatingchassisandifpingpolldefs.html.
When you define a new poll definition, all of the tabs in the Poll Definition Editor are editable, except for the NPI Anomaly Threshold tab, which is blank. Complete the other tabs but do not complete the NPI Anomaly Threshold tab; complete this tab later in the procedure.
2. Ensure that the poll definition is specified in an active poll policy.
a) If necessary, create a poll policy. For more information about creating poll policies, see https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/poll/task/poll_crtpoll.html.
b) Set the poll definition to store historical data, by selecting the Poll Policy Properties tab in the Poll Policy Editor and clicking the Store? check box next to the name of the poll definition.
c) Enable the poll policy. For more information about enabling poll policies, see https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/poll/task/poll_enablepoll.html.
3. Open the poll definition and define anomaly thresholds by following the instructions in the next step.
4. Click the NPI Anomaly Threshold tab and specify static thresholds for triggering performance measurement anomalies in the Device Dashboard, and associated Tivoli Netcool/OMNIbus events.
a) In the Upper Limit and Lower Limit fields, specify the upper and lower threshold limits.
b) In the Consecutive Occurrences spinner, specify the number of consecutive threshold violations that must occur before an anomaly and an event are generated.
c) In the Type drop-down list, specify the threshold type. There are three types of threshold. In each threshold type, the current value is compared to the upper and lower threshold limits.
Upper type threshold
Use this type of threshold when the desired value of the performance measure is lower than the threshold limits.
• If the current value exceeds the lower limit but is less than the upper limit for the specified number of consecutive occurrences, then a lower severity (orange) anomaly is generated on the Device Dashboard, and a Major Tivoli Netcool/OMNIbus event is generated.

Chapter 8. Operations 537 • If the current value exceeds the upper limit for the specified number of consecutive occurrences, a higher severity (red) anomaly is generated on the Device Dashboard, and a Critical Tivoli Netcool/OMNIbus event is generated. Lower type threshold Use this type of threshold when the desired value of the performance measure is higher than the threshold limits. • If the current value is less than the upper limit but higher than the lower limit for the specified number of consecutive occurrences, then a lower severity (orange) anomaly is generated on the Device Dashboard, and a Major Tivoli Netcool/OMNIbus event is generated. • If the current value is less than the lower limit, for the specified number of consecutive occurrences, then a higher severity (red) anomaly is generated on the Device Dashboard, and a Critical Tivoli Netcool/OMNIbus event is generated. Band type threshold Use this type of threshold when the desired value of the performance measure is within a band specified by the lower and upper threshold limits. If the current value falls outside of the band (either above or below the band) for the specified number of consecutive occurrences, a higher severity anomaly is generated on the Device Dashboard, and a Critical Tivoli Netcool/OMNIbus event is generated. 5. Click Save.
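The threshold logic in step 4 can be summarized in pseudocode form. The following Python sketch is illustrative only and is not part of the product; the function names, sample values, and severity labels are hypothetical, but the comparison rules and the Consecutive Occurrences counter follow the three threshold types described above:

# A minimal sketch (not product code) of the three anomaly threshold
# types and the Consecutive Occurrences counter described in step 4.

def classify(value, lower, upper, threshold_type):
    """Return 'critical', 'major', or None for a single polled value."""
    if threshold_type == "upper":
        # Desired value is below both limits.
        if value > upper:
            return "critical"          # red anomaly, Critical event
        if value > lower:
            return "major"             # orange anomaly, Major event
    elif threshold_type == "lower":
        # Desired value is above both limits.
        if value < lower:
            return "critical"
        if value < upper:
            return "major"
    elif threshold_type == "band":
        # Desired value lies between the limits.
        if value < lower or value > upper:
            return "critical"
    return None

def detect(samples, lower, upper, threshold_type, consecutive):
    """Yield an anomaly only after 'consecutive' violations in a row."""
    run, severity = 0, None
    for value in samples:
        current = classify(value, lower, upper, threshold_type)
        if current is None:
            run, severity = 0, None
            continue
        run = run + 1 if current == severity else 1
        severity = current
        if run >= consecutive:
            yield (value, severity)

# Example: CPU utilization with an upper-type threshold of 70/90 and
# three consecutive occurrences required. Only the fourth sample,
# the third consecutive violation, raises an anomaly.
for value, sev in detect([65, 75, 78, 80, 95], lower=70, upper=90,
                         threshold_type="upper", consecutive=3):
    print(f"anomaly: value={value} severity={sev}")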

Administering the Device Dashboard Perform these tasks to administer the Device Dashboard for users.

Changing the Network Performance Insight server for all users If you need to change the Network Performance Insight server that provides data to the Device Dashboard, you must perform the following steps.

Procedure
1. Edit the npi.properties file. This file is located by default at $NMGUI_HOME/profile/etc/tnm/npi.properties, where $NMGUI_HOME is by default /opt/IBM/netcool/gui/precision_gui.
2. In the npi.properties file, change the following properties to point to the new Network Performance Insight server:

npi.server.name=<hostname>

npi.server.port=<port>

npi.host.name=https://<hostname>:<port>

Where <hostname> is the host name of the Network Performance Insight node where the UI Service is installed and the <port> number is 9443.
3. Instruct each end user to go to the Device Dashboard and change the portlet preferences there to point to the new Network Performance Insight server. End users must do this because the portlet preferences override the properties defined in the npi.properties file.
4. Stop and then restart the WebSphere Application Server.
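For example, after the change the three properties might look as follows. The host name shown is a hypothetical value for illustration; substitute the fully qualified host name of your own Network Performance Insight UI Service node:

npi.server.name=npi-ui.example.com
npi.server.port=9443
npi.host.name=https://npi-ui.example.com:9443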

Traffic Details dashboard Use the Traffic Details dashboard to monitor the network performance details of a particular interface. Network Performance Insight provides built-in, interactive dashboards that cover the full range of traffic data. The Flow data that Network Performance Insight collects is shown in the Traffic Details dashboard, which displays the traffic details at the interface level.

You can launch the Traffic Details portlet, which is available as a widget, from the following dashboards:
• From the Network Health Dashboard The traffic details for an interface are populated in the Network View page, from a selected network view.
• From the Device Dashboard Right-click an interface of interest and select Show Traffic from the Interfaces tab in the Performance Insights portlet.
• From the Event Viewer or AEL Right-click a Flow event from the Event Viewer or the AEL and select Flow Dashboard to view the traffic details for the event.

Traffic Details dashboard views Use this information to see the list of views that are available in the Traffic Details dashboard. The Traffic Details dashboard populates the performance metrics based on the IP network traffic information that is collected as packets enter or exit an interface of a device. This resource provides a view of the applications responsible for traffic volume, either inbound or outbound, through the interface over the selected time period. This data provides granular traffic details at the interface level. You can view the Flow data that is collected by Network Performance Insight from the Traffic Details dashboard. All the widgets in the dashboard display the top 10 metric values for the resources. The Traffic Details dashboard views are as follows:
• Top Applications
• Top Applications with ToS
• Top Autonomous System Conversations
• Top Destination Autonomous Systems
• Top Source Autonomous Systems
• Top Conversations
• Top Conversations with Application
• Top Conversations with ToS
• Top Destinations
• Top Destinations with Application
• Top IP Group Conversations
• Top IP Group Conversations with Application
• Top IP Group Conversations with Protocol
• Top IP Group Conversations with ToS
• Top Destination IP Groups
• Top Destination IP Groups with Application
• Top Destination IP Groups with Protocol
• Top Destination IP Groups with ToS
• Top Source IP Groups
• Top Source IP Groups with Application
• Top Source IP Groups with Protocol
• Top Source IP Groups with ToS
• Top Protocols
• Top Protocols with Application
• Top Protocols with Conversation
• Top Protocols with Destination
• Top Protocols with Source
• Top QoS Hierarchies with Queue ID
• Top Sources
• Top Sources with Application
• Top ToS
Note: All the data on the Traffic Details dashboard views is populated based on your local timezone. To view all the Top Talker resource views from the Traffic Details dashboard, enable the Flow aggregations. For more information, see Configuring Flow aggregations.
Related information
Built-in aggregation definitions

Displaying NetFlow performance data from Network Health Dashboard IBM Networks for Operations Insight is an optional feature that can be added to a deployment of the base Netcool Operations Insight solution to provide service assurance in dynamic network infrastructures. The capabilities of Networks for Operations Insight include network discovery, visualization, event correlation and root-cause analysis, and configuration and compliance management. The Network Health Dashboard monitors a selected network view, and displays device and interface availability within that network view. It also reports on performance by presenting graphs, tables, and traces of KPI data for monitored devices and interfaces. The Traffic Details portlet can be launched for the interfaces that are available in the Structure Browser. The Network Health Dashboard includes the Event Viewer for detailed event information. You can start the Traffic Details portlet to display the NetFlow traffic details for an interface that violated a set threshold value.

Adding the Network Performance Insight widget in Network Health dashboard Dashboards, or pages, are an arrangement of one or more widgets in the work area and contain the widgets that are needed to complete tasks. Users whose roles have Editor or Manager access to a dashboard can edit the dashboard's layout and content. You can add multiple widgets to a page. When you add widgets, you can also rearrange them as needed.

About this task By default, the Network Health Dashboard page displays the Network View, Structure Browser, and Event Viewer widgets. To view the traffic details from the Network Health Dashboard for the first time, you need to add the Network Performance Insight widget.

Procedure 1. Log in to Jazz for Service Management server.

2. Click the Incident icon ( ) and select Network Health Dashboard under Network Availability. 3. In the Network Health Dashboard, select a network view. A second tab, called Network View, opens.

4. In the tab bar, click the Page Actions icon ( ) and select Edit Page. The dashboard changes to show the widget palette and a series of buttons in the tab bar. The menu that is associated with the Edit options icon for each widget is updated so that you can edit its layout and content.

5. Click the NPI folder in the widget palette. The NPI folder name is based on the launch-in-context tool that is created for Network Performance Insight. 6. Click and drag the Traffic Details widget from the palette. To help you position the widget, use the background layout grid. You can change the size of the layout grid, and have widgets snap to the layout guide lines, through the Layout button in the tab bar. 7. Click Save and Exit to exit the dashboard edit mode after you complete the editing. Related information Editing dashboard content and layout

Traffic Details from Network Health Dashboard The Network Health Dashboard displays the traffic details of a particular network. The Traffic Details widget data is populated from the NCIM view that is part of the Network Performance Insight database structure, where it joins multiple tables into a single virtual table. The NCIM view represents the subset of data that is discovered by Tivoli Network Manager and the Network Performance Insight Flow tables. The data that is discovered by Tivoli Network Manager is mapped to Flow records by using the following fields:

Table 82. Mappings table

Flow table        NCIM view
exporter ip       device ip address
if index          interface index
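The following Python sketch illustrates, under stated assumptions, how this two-field join behaves. The record layouts, field names, and values are hypothetical, simplified stand-ins for the Flow table and the NCIM view:

# A minimal sketch (hypothetical field names) of the join that the NCIM
# view performs between Network Performance Insight Flow records and the
# interface data discovered by Tivoli Network Manager.

flow_records = [
    {"exporter_ip": "10.0.0.1", "if_index": 3, "octets": 52400},
]
ncim_interfaces = {
    # Keyed by (device ip address, interface index), per Table 82.
    ("10.0.0.1", 3): {"device": "router1.example.com",
                      "description": "GigabitEthernet0/3",
                      "speed": 1_000_000_000},
}

for rec in flow_records:
    key = (rec["exporter_ip"], rec["if_index"])
    snmp = ncim_interfaces.get(key)
    if snmp is None:
        # Interfaces without a mapping show no SNMP fields in the widget.
        continue
    print(rec["octets"], snmp["device"], snmp["description"], snmp["speed"])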

The following SNMP fields in the Traffic Details widget are populated from the NCIM view. Note: If the discovered interfaces are not mapped to Network Performance Insight flow data, you cannot see the SNMP fields in the table on the Traffic Details dashboard.

Table 83. SNMP fields in Traffic Details dashboard

Device
Description: The device name.
Mapping: The exporter ip from the Flow table is mapped to the device name in the NCIM view. This mapping results in the device name that is populated in the Traffic Details widget.

Index
Description: A unique number that is associated with the physical or logical interface.
Mapping: The if index from the Flow table is mapped to the interface index in the NCIM view. This mapping results in the index value that is populated in the Traffic Details widget.

Description
Description: The interface name.
Mapping: The interfaces that are discovered by Tivoli Network Manager are mapped to the collected Network Performance Insight flow data as the interface name. This mapping results in the description that is populated in the Traffic Details widget.

Speed
Description: The value of the traffic flow through network interfaces, which measures the speed of the data transferred.
Mapping: The speed from the Flow table is mapped to the interface speed from the NCIM view. This mapping results in the speed value that is populated in the Traffic Details widget.

Launching Traffic Details dashboard from Network Health dashboard The Traffic Details widget displays the details of an interface from a selected network device.

About this task Browse from the Network Health dashboard to view the specific traffic details of an interface in the Traffic Details page. By default, the Traffic Details page displays the details for Top Ingress interfaces at the Ingress level and Top Egress interfaces at the Egress level, whereas Top Interfaces and all Top Networks display the traffic details as Both.

Procedure 1. Log in to Jazz for Service Management server.

2. Click the Incident icon ( ) and select Network Health Dashboard under Network Availability. The dashboard page populates the configured network devices. 3. Select a view from the Network Views bookmark that you configured from the Network Health Dashboard. The other widgets update to show information based on the network view that you selected. The Network View dashboard opens in another tab. This dashboard contains the Network Views GUI, the Event Viewer, the Structure Browser, and the Traffic Details widget, and it displays the selected network view. 4. Double-click a network from the Network View. For example, double-click All Routers. 5. Click an entity or device from the Network View. The selected entity details are displayed in the Structure Browser.

6. Click the Show Interfaces icon ( ) from the Structure Browser. A list of interfaces for the entity is displayed. 7. Click an interface. The traffic details data with the interface details is displayed in the Traffic Details widget. 8. Select an entity from the View list. For information on monitoring traffic details, see “Monitoring NetFlow performance data from Traffic Details dashboard” on page 543.

9. Optional: Click the Maximize icon ( ) from the upper right of the Traffic Details widget. The traffic details dashboard is displayed in full screen mode.

Displaying NetFlow performance data from Event Viewer You can monitor and manage network performance from events that are generated by Tivoli Netcool/OMNIbus on the Web GUI.

About this task You can access the events from the following widgets: • Managing events in the Event Viewer Use the JavaScript Event Viewer to monitor and manage events. You can access Event Viewers in any page in Dashboard Application Services Hub that hosts an Event Viewer widget. • Monitoring events in Active Event Lists The Active Event List (AEL) is an interactive Java applet for displaying alert data from the ObjectServer. Communication between the ObjectServer and the AEL is bidirectional. The AEL presents alert information from the alerts.status table in the ObjectServer to operators. From the AEL, operators can perform actions against alerts, such as changing values in the alert properties in the alerts.status table.

Procedure 1. Log in to Jazz for Service Management server.

2. Click Incident ( ) > Events > Event Viewer in the navigation bar. 3. Right-click an event that is labeled as NPI/Flow under the Manager column from the Event Viewer and select one of the following commands. Both of these commands launch the Traffic Details dashboard. • Flow Dashboard • Performance Insight > Show Traffic The Traffic Details dashboard that is associated with the selected event is displayed in another window. Note: You can launch the Traffic Details dashboard from Flow events only. Flow events are marked in the Event Viewer using the value NPI/Flow in the Manager column.

4. Optional: Right-click a Flow event from Incident ( ) > Events > Active Event List (AEL) and select Flow Dashboard. The Traffic Details dashboard that is associated with the selected event is displayed in another window. Related information Configuring launch-in-context integration with Network Performance Insight Event severity levels

Monitoring NetFlow performance data from Traffic Details dashboard After launching the Traffic Details dashboard, you can display the different views of NetFlow data for a selected interface.

About this task By default, the Traffic Details page displays the data about the top ingress, top egress, or both flows for an interface.

Procedure 1. Select Traffic Details for Ingress, Egress, or Both from the list.

2. Select any dashboard view from the View list. For more information about dashboard views, see “Traffic Details dashboard views” on page 539. 3. Click to select a start date from the Start field.

4. Click to select start time. 5. Click to select an end date from the End field.

6. Click to select the end time. 7. Click Update to update the details for the selected date and time. Two red lines are displayed in the graph to indicate that the Upper Threshold and Lower Threshold limits are crossed. When you hover over the area charts, a tooltip displays the details for that particular source. The graphical presentation of data is updated for the selected date and time.

8. Click to refresh the page. 9. Select one or more interfaces from the legend on the right. The legend displays the top 10 interface details. 10. Clear the check box for a particular interface. The details for that interface are hidden. 11. Select the Remaining check box to display the details for the remaining traffic on the interfaces. Note: If the upper limit and lower limit of a threshold are crossed, two extra check boxes for Upper Threshold and Lower Threshold are shown in the legend. The table at the bottom displays the top 10 interface details. The table displays the following details for an interface: Rank Displays the rank number in ascending order. Grouping Displays the interface for which you are viewing the data. For example, if you select Top Protocols from the View list, the grouping that is displayed is for Protocol. The columns change depending on the view selected. The main grouping keys for the Traffic Details dashboard are as follows:

Table 84. Grouping

Application
Applications are mapped based on port, protocol, and IP address or network.

Destination
A destination can be a host computer to which the network flow comes from a source computer.

Destination AS
The autonomous systems that a specific route passes through to reach a destination.

Destination IP Group
A group of destinations that you want to control by specifying the IP addresses; a grouping of endpoints for traffic accounting and billing purposes.

Protocol
A standard that defines how a network conversation is established. It delivers the packets from the source host to the destination host.

Source
The IP address from which the traffic originates.

Source AS
The autonomous systems that a specific route passes through from a source to reach a destination.

Source IP Group
A group of sources that you want to control by specifying the IP addresses; a grouping of endpoints for traffic accounting and billing purposes.

Source ToS
The class of service, which examines the priority of the traffic.

QoS Hierarchy
QoS behavior at multiple policy levels, which provides visibility into how the defined traffic classes are performing.

QoS Queue ID
QoS queues provide bandwidth reservation and prioritization of traffic as it enters or leaves a network device.

Octets Displays the amount of data that is used, in KB and bytes. Percentage The percentage of the traffic that the grouping occupies. Note: If the top 10 interface table is not shown, reduce the zoom percentage of your current browser.
Related information
Built-in aggregation definitions
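The Rank, Octets, and Percentage columns follow directly from aggregating octets per grouping key. The following Python sketch shows the idea with hypothetical flow records; it is an illustration of the ranking arithmetic, not product code:

# A minimal sketch of how a Traffic Details view ranks a grouping key:
# aggregate octets per key, take the top 10, and report each key's share
# of the total traffic as a percentage.

from collections import Counter

# Hypothetical (grouping key, octets) pairs for the Top Applications view.
flows = [("HTTPS", 9_000), ("DNS", 400), ("HTTPS", 3_000), ("SSH", 600)]

totals = Counter()
for app, octets in flows:
    totals[app] += octets

grand_total = sum(totals.values())
for rank, (app, octets) in enumerate(totals.most_common(10), start=1):
    print(rank, app, octets, f"{octets / grand_total:.1%}")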

Network Performance Insight Dashboards After the system is configured as per your requirements, Network Performance Insight can start collecting, aggregating, and storing the network performance data. The data is rendered on the various ready-to-use dashboards that it offers.
Categories of Network Performance Insight Dashboards:
• Network Performance Overview dashboard
• Network Performance Overview by Deviation dashboard
• “WiFi Overview dashboard” on page 571
• “IP Links Performance Overview dashboard” on page 576
• “Load Balancer dashboards” on page 580
• NetFlow dashboards
• On Demand Filtering dashboards
Network Performance Insight Dashboards can be grouped functionally as follows:
• Operational dashboards
• Analytical dashboards
• Strategic dashboards
• On-demand dashboards
Important: When Network Performance Insight is used to collect, aggregate, and render the NetFlow data alone, the following dashboards are applicable:
• “NetFlow dashboards” on page 602
• On Demand Filtering - Flow dashboard
The following dashboards that display performance data are not applicable in this scenario:
• Network Performance Overview dashboard
• Network Performance Overview by Deviation dashboard
• “WiFi Overview dashboard” on page 571
• On Demand Filtering - IPSLA dashboard
• On Demand Filtering - Device Health dashboard
• On Demand Filtering - HTTP Operations dashboard
• On Demand Filtering - Timeseries Data dashboard

Getting started with Network Performance Insight Dashboards This information provides instructions and general information on how to use the Network Performance Insight Dashboards, which render network performance data from Network Performance Insight. The Network Performance Insight Dashboards report the network performance data that is gathered and stored by Network Performance Insight and its components. Network Performance Insight Dashboards are federated on the Dashboard Application Services Hub portal, and you can derive the following information from them: • Summary-level views of the network, to see how your network resources are performing. • Detailed views from the listener and drill-down widgets. You can switch between different metric views to analyze and monitor your network. • Top Talker resource views that help you understand network insights at a more granular level. • The ability to share this information with other stakeholders by generating and sending the report information in PDF, CSV, or XLS format.


Accessing the Network Performance Insight Dashboards Access the Network Performance Insight Dashboards from Dashboard Application Services Hub portal.

Procedure 1. Log in to Dashboard Application Services Hub portal with npiadmin and netcool credentials.

2. Click the Console Integrations icon ( ) in the navigation bar and select Dashboards under Performance. The page loads with a menu bar to navigate to the different Network Performance Insight Dashboards.
a) Click Home to see the following dashboards from the menu bar:
• Network Performance Overview
The Network Performance Overview: Top 10 dashboard offers a summary-level current view of the managed network.
• Network Performance Overview by Deviation
The Network Performance Overview by Deviation: Top 10 Deviations dashboard offers a summary-level deviation view of the managed network.
• WiFi Overview
The WiFi Overview dashboard offers a summary-level current view of the WiFi network.
• IP Links Performance Overview
The IP links performance is monitored through a heat map, which provides an overall view of outbound one-way link performance for speech applications between a source and destination server present at various geographical locations.
• Load Balancer

Under this category, the following dashboards are available:
– Load Balancer Overview
– GTM Details
– LTM Details
– Virtual Server Details
– Pool Details
– Pool Member Details
b) Click NetFlow to see the Network Traffic Overview and Applications Response Overview dashboards and all the NetFlow built-in Top N resource views. You can see the following dashboards:

Dashboard group and available views:

Network Traffic Overview
• Network Traffic Overview: Top 10 Traffic dashboard that contains various widgets.

Applications Response Overview
• Applications Response Overview: Top 10 dashboard that contains various widgets.
• Applications Response Time

Applications
• Top Applications
• Top Applications with ToS

Conversations
• Top Conversations
• Top Conversations with Application
• Top Conversations with ToS

Sources
• Top Sources
• Top Sources with Application

Protocols
• Top Protocols
• Top Protocols with Application
• Top Protocols with Conversation
• Top Protocols with Destination
• Top Protocols with Source

Destinations
• Top Destinations
• Top Destinations with Application

ToS
• Top ToS

Autonomous Systems
• Top Autonomous System Conversations
• Top Destination Autonomous Systems
• Top Source Autonomous Systems

QoS Queue
• QoS Queue Drops

IP Address Grouping
• Top IP Group Conversations
• Top IP Group Conversations with Application
• Top IP Group Conversations with Protocol
• Top IP Group Conversations with ToS
• Top Destination IP Groups
• Top Destination IP Groups with Application
• Top Destination IP Groups with Protocol
• Top Destination IP Groups with ToS
• Top Source IP Groups
• Top Source IP Groups with Application
• Top Source IP Groups with Protocol
• Top Source IP Groups with ToS

c) Click On Demand Filtering to see the following views:
• Device Health
• IPSLA
• Flow
• HTTP Operations
• Timeseries Data

Generic functions of Network Performance Insight Dashboards Use this information to understand the generic interactivity and filtering functions that are available on Network Performance Insight Dashboards.

Procedure • Generic interactivity that is applicable for all Network Performance Insight Dashboards:

a) Click the Auto Refresh ( ) icon to enable or disable auto refresh. Note: By default, the data is auto refreshed every minute. If the auto refresh option is enabled, the dashboard metrics are refreshed with the latest values.

b) Click the Email PDF ( ) icon to email the dashboard view as a PDF. Complete the following steps to email a PDF: a. In the Email the PDF file window that is displayed, enter the following details: 1) Subject 2) Email To Note: Ensure that the email address is valid and in the correct format. 3) Content b. Click Send.

c) Click Save As ( ) icon and select PDF, CSV, or XLS to save and export the dashboard to the selected file format. Note:

– In the PDF file format, the complete data is populated on the next page in a tabular format. You need to click the grid to view the complete data. – The PDF option is not available for On Demand Filtering dashboards.

d) To maximize the widget display, click .

e) To minimize the widget display, click . • Generic interactivity that is applicable for only the WiFi Overview, NetFlow and On Demand Filtering dashboards and widgets:

a) To change to a different chart type, click and select a different chart type from the widget. The widget renders according to the selected chart type.

b) To hide the widget, click .

c) To display the widget, click .

d) From the Datasource icon ( ), click to list the options to choose another performance metric. Note: The Datasource icon is available on the Network Performance Overview, Network Performance Overview by Deviation, and NetFlow dashboard widgets. e) Click the column header in any grid widget to sort in ascending or descending order. You can sort the data numerically or alphabetically, depending on what type of data is populated in the grid widget.

f) The following icon on a widget indicates that it is a drill-down chart. Click one of the elements from the chart widget. For example, in a bar chart, click one of the bars to drill down one hierarchy level, corresponding to the bar that you selected. • Widget-level filters for WiFi Dashboards Widget-level filters are available for the Grid chart type. In the WiFi dashboards, the two widgets Access Points Details and WiFi Network Details are represented in table format. Filter details:

1. Click the icon that is available beside the column name. A pop-up window appears that has a list, a field for entering filter inputs, and two buttons, Apply Filter and Clear Filter. 2. From the first list field, select any one of the following six values: – Equals: Select this value if you want to include a specific column value in the filter results. – Not Equal: Select this value if you want to exclude a specific column value from the filter results. – Starts With: Select this value if you want to filter the column values that start with a particular letter. – Ends With: Select this value if you want to filter the column values that end with a particular letter. – Contains: Select this value if you want to filter the column values based on a particular letter, or sequence of letters, that they contain. – Not Contains: Select this value if you want to filter the column values based on a particular letter, or sequence of letters, that they do not contain. 3. In the field, enter the value for which you want to apply the filter.

4. After you enter the filter value in the field, two grouping options and another list and filter field appear. You can group the filter results of the two list and field values. The grouping options are: – AND: Conditions from both the first and the second list and field pair must match the values of the column. – OR: Conditions from any one of the two list and field pairs must match the values of the column. 5. Click Apply Filter. 6. Click Clear Filter and then Apply Filter to see the original table.
• TimeZoom TimeZoom is a feature enhancement for Line charts to make them more readable. When there are so many data points at a particular time stamp that it becomes difficult to identify and distinguish the data points from one another, the TimeZoom feature increases the time granularity and zooms into the chart to sparsely place the data points. For instance, if the time stamp records data at an hourly rate, applying TimeZoom displays the data points that are recorded at a shorter interval, say half an hour, and places the data points at a comprehensible distance. TimeZoom is represented in the form of a horizontal scroll bar, which can zoom out and in as you move it. It is placed at the bottom of the widget. The following widgets are enhanced with the TimeZoom feature:
– WiFi Client Count (drill down from WiFi Overview)
– WiFi Interference and Noise Performance (drill down from WiFi Overview)
– Device Health History
– IPSLA History
– HTTP Response Time
– Timeseries Data
– Application Response Time (drill down from Application Response Overview)
• Filtering options that are applicable for Network Performance Insight Dashboards. The filter options and lists differ based on the different types of Network Performance Insight Dashboards.

Table 85. Filter options

Aggregation Type
Filter description: The list contains the available NetFlow dashboards.
Dashboards: On Demand Filtering Flow.

AP Radio Channel
Filter description: The list contains the available AP radio channels in the WiFi network.
Dashboards: WiFi Overview.

Application
Filter description: The list contains the available applications.
Dashboards: Application Response Time.

Business Hour
Filter description: The list contains the peak or non-peak business hours option. You can select either Business Hour or Non-Business Hour, or both as ALL.
Dashboards: Network Performance Overview; Network Performance Overview by Deviation; Network Traffic Overview; all Load Balancer dashboards; IP Links Performance Overview.

Controller
Filter description: The list contains the controllers that are connected to the WiFi network.
Dashboards: WiFi Overview.

Destination
Filter description: The list contains the destination IP addresses for the specific source IP addresses.
Dashboards: Source and Destination Details.

KPI
Filter description: The list contains the available performance metrics.
Dashboards: On Demand Filtering Device Health; On Demand Filtering HTTP Operations; On Demand Filtering IPSLA.

LTM
Filter description: The list contains the available LTM devices.
Dashboards: Pool Details; Pool Member Details; Virtual Server Details.

Site
Filter description: The list contains the locations to view the traffic levels.
Dashboards: Network Performance Overview; Network Performance Overview by Deviation; Network Traffic Overview; all Load Balancer dashboards; IP Links Performance Overview.

SLA Test
Filter description: The IPSLA operations (SLA Test) list contains options from the following functional areas: availability monitoring, network monitoring, application monitoring, voice monitoring, and video monitoring.
Dashboards: On Demand Filtering IPSLA.

Source
Filter description: The list contains the configured or discovered devices.
Dashboards: On Demand Filtering HTTP Operations; On Demand Filtering IPSLA; IP Links Performance Overview.

Sort By
Filter description: Select from the list to sort the result by top or bottom.
Dashboards: On Demand Filtering Device Health; On Demand Filtering HTTP Operations; On Demand Filtering IPSLA.

Target
Filter description: The list contains the available target servers.
Dashboards: Application Response Time.

Time Period
Filter description: The dashboard data is populated based on the Network Performance Insight server time zone.
– Last Hour filters the last 1 hour from the current time.
– Last 6 Hours filters the last 6 hours from the current time.
– Last 12 Hours filters the last 12 hours from the current time.
– Last 24 Hours filters the last 24 hours from the current time and day.
– Last 7 Days filters the last 7 days from the current time and day.
– Last 30 Days filters the last 30 days from the current time and day.
– Last 365 Days filters the last 365 days from the current time and day.
– Custom: from the Time Period Selection, you can filter based on a specific date and time range.
Note:
– For the Network Performance Overview by Deviation and Network Traffic Overview dashboards, only the Last Hour and Last 24 Hours time period filter options are available.
– For the Network Performance Overview dashboard, only the Last Hour, Last 24 Hours, Last 7 Days, Last 30 Days, and Custom time period filter options are available.
View the start and end time on the dashboard title bar. The start and end time is displayed according to the filter that you select. For example, if the current time is 3:00 PM on 11/13/17 and you select the filter for Last 24 Hours, the dashboard displays the time period as: 11/12/17, 3:00 PM - 11/13/17, 2:59 PM.
Dashboards: Network Performance Overview; Network Performance Overview by Deviation; WiFi Overview; Network Traffic Overview; all Load Balancer dashboards; IP Links Performance Overview; Source and Destination Details; Applications Response Overview; Applications Response Time; NetFlow top talker dashboards; On Demand Filtering Device Health; On Demand Filtering Flow; On Demand Filtering HTTP Operations; On Demand Filtering IPSLA; On Demand Filtering Timeseries Data.

Top N
Filter description: From the list, you can choose N number of Top Performers to compare historical poll data across multiple entities and metrics in a selected network view.
Dashboards: On Demand Filtering Flow; all Load Balancer dashboards.

Resource Type
Filter description: The list contains the available types of resources.
Dashboards: On Demand Filtering Timeseries Data.

Resource Name
Filter description: The list contains the available resource names.
Dashboards: On Demand Filtering Timeseries Data.

What to do next Click Apply Filter to apply the filter selection. The dashboard reloads the data according to the filter selections.
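The six filter operators and the AND/OR grouping described earlier for the widget-level filters can be modeled as simple string predicates. The following Python sketch is a hypothetical illustration of that behavior, not the product implementation; the column values are made up:

# A minimal sketch of the Equals / Not Equal / Starts With / Ends With /
# Contains / Not Contains operators and the AND/OR grouping options.

PREDICATES = {
    "Equals":       lambda v, t: v == t,
    "Not Equal":    lambda v, t: v != t,
    "Starts With":  lambda v, t: v.startswith(t),
    "Ends With":    lambda v, t: v.endswith(t),
    "Contains":     lambda v, t: t in v,
    "Not Contains": lambda v, t: t not in v,
}

def apply_filter(rows, first, second=None, grouping="AND"):
    """first/second are (operator, text) pairs; grouping is 'AND' or 'OR'."""
    def matches(value):
        ok = PREDICATES[first[0]](value, first[1])
        if second is None:
            return ok
        ok2 = PREDICATES[second[0]](value, second[1])
        return (ok and ok2) if grouping == "AND" else (ok or ok2)
    return [r for r in rows if matches(r)]

# Hypothetical access point names from a WiFi Network Details grid.
names = ["AP-Floor1", "AP-Floor2", "Lab-AP"]
print(apply_filter(names, ("Starts With", "AP"), ("Contains", "1"), "AND"))
# Prints: ['AP-Floor1']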

Site and Business hour filter condition Conditions that are applicable for the Site and Business Hour filters. These filter options are available on the Network Performance Overview, Network Performance Overview by Deviation, Network Traffic Overview, and Load Balancer dashboards. The following table describes the conditions that apply to the filters and the time periods that are based on them.

Table 86. Site and Business Hour filter conditions

Network Performance Overview
• Site: ALL. Business Hour: ALL. Time Period: Last Hour, Last 24 Hours, Last 7 Days, Last 30 Days, Custom. Note: Last Hour is available only when you select ALL for both Site and Business Hour.
Condition: When you select ALL from the Site filter option, both business hour and non-business hour data is queried and displayed.
• Site: a specific site. Business Hour: ALL, Business Hour, or Non-Business Hour. Time Period: Last 24 Hours, Last 7 Days, Last 30 Days, Custom.
Condition: When you select a site from the Site filter option, the Time Period filter option list starts from Last 24 Hours and more. Note: Use the Custom filter to get more granularity.

Network Performance Overview by Deviation and Network Traffic Overview
• Site: ALL. Business Hour: ALL. Time Period: Last Hour, Last 24 Hours.
Condition: When you select ALL from the Site filter option, both business hour and non-business hour data is queried and displayed.
• Site: a specific site. Business Hour: ALL, Business Hour, or Non-Business Hour. Time Period: Last 24 Hours.
Condition: When you select a site from the Site filter option, the Time Period filter option list displays only Last 24 Hours.

Load Balancer dashboards
• Site: ALL. Business Hour: ALL. Time Period: Last Hour, Last 6 Hours, Last 12 Hours, Last 24 Hours, Last 7 Days, Last 30 Days, Last 365 Days, Custom. Note: Use the Custom filter to get more granularity.
Condition: When you select ALL from the Site filter option, both business hour and non-business hour data is queried and displayed.
• Site: a specific site. Business Hour: ALL, Business Hour, or Non-Business Hour. Time Period: Last 24 Hours, Last 7 Days, Last 30 Days, Last 365 Days, Custom.
Condition: When you select a site from the Site filter option, the Time Period filter option list starts from Last 24 Hours and more. Note: Use the Custom filter to get more granularity.


Network Performance Overview dashboards The Network Performance Overview dashboards summarize a high-level view of your network, application, and device performance data in a single location. They give you full control over how you investigate and analyze the measurements. You can navigate from here to detailed views that focus on specific diagnostics. Network Performance Overview dashboards provide advanced monitoring of your network from a single pane of glass, and you can drill down to specific details from here. Two types of overview dashboards are available, displaying data by current values and by deviation.

All the widgets in the overview dashboards display the metrics by current values for the top 10 NetFlow enabled resources in your network. You can assess network traffic congestion, network traffic utilization, application performance, quality of service, and device performance easily and quickly from the widgets in this dashboard for the following areas:
Congestion
Network congestion can occur due to high data volumes and not enough capacity to support the applications that are involved. It can lead to delays and packet drops and cause brownouts. Some of the reasons for congestion might be as follows:
• Network data in the route is more than the network capacity.
• Poor network design.
• Inefficient internet routing. Border Gateway Protocol (BGP), which sends all traffic through the shortest logical path, is not congestion aware, and transit paths can become overloaded.
• Incorrect QoS configuration or no QoS implementation.
Typical effects include queuing delays, packet loss, or refusing new connections to the resource. From these widgets, you can understand which interfaces are experiencing congestion by observing the inbound and outbound packet discard deviations. This data can be correlated with QoS queue drops and total application response time.
• When a router receives NetFlow data at a rate faster than it can transmit, the device buffers the extra traffic until bandwidth is available. This buffering process is called queuing. You can use QoS to define the priorities to transmit the records from the queue.
• The application response time is represented as total delay, which is the sum of max client network delay, max server network delay, and application delay.
Quality of Service
When real-time data is passed in a network (for example, video, audio, or VoIP), implementing Quality of Service is crucial for a good user experience. These dashboards help you to understand the round-trip time for successful echo operations. You can use this dashboard to troubleshoot issues with business-critical applications by determining the round-trip delay times, testing connectivity to the devices, and probe loss. Voice performance is monitored with the help of the widgets in the Quality of Service section.
Traffic
To ensure adequate network bandwidth for users to use the business-critical applications, you can correlate bandwidth utilization on different interfaces and the applications that consume the bandwidth. You can then see the applications that are impacted on the interfaces with high-bandwidth utilization.
Device load
Faulty devices can create a bottleneck in your network. High CPU utilization and memory loads can adversely affect the performance of the device and network. To understand the critical health of the monitored devices, the Device load widgets are useful.

Network Performance Overview dashboard All the widgets in this dashboard display the metrics by current values for top 10 resources in your network.

Available widgets and their interactivity The master-listener and drill-down widget interactions can be seen in the following ways:

• For any widgets with the ( ) drill-down icon, click any bar in the chart that represents the interface or metric to open the drill-down dashboard page, which contains the associated widgets. • For the other widgets that are shown in the diagram to have a master-listener interaction, click any of the bars that represent the interface to change the listener widget, which displays the related data for the selected interface. The diagram shows the master-listener and drill-down interactions between the available widgets:

Network Performance Overview
1. Click Home > Network Performance Overview. The Network Performance Overview: Top 10 dashboard loads. This dashboard displays the metrics values for the top 10 resources.
2. You can filter data based on the following filters:
• Site: the list contains the locations to view the traffic levels.
• Business Hour: the list consists of Business Hour, Non-Business Hour, and both as ALL.
• Time Period: the list consists of Last Hour, Last 24 Hours, Last 7 Days, Last 30 Days, and Custom.
Note:
• You can categorize your enterprise network based on different geographical areas and sites. You can configure the sites by specifying the IP address ranges and the time period for business hours and non-business hours. To configure these properties, see Configuring site grouping.
• Refer to the conditions that apply when you use the Site and Business Hour filter options.

Table 87. Widget interactions

Congestion
Master widget: Top 10 Outbound Packet Discard (Packets). Click any bar that represents the interface outbound packet discard to refresh the listener widgets to display the related data for the selected interface.
Listener widget: Total Packet Drops Per Queue (Packets). Click the Find out more details about Packet Drops per Queue. Click here link to open the QoS Queue Drops dashboard in a new tab. Drill-down dashboard: QoS Queue Drops.
Listener widget: Top 10 Applications by Total Delay (ms). Click any bar that represents an application in Top 10 Applications by Total Delay (ms) to open the Applications Response Time page in a new tab. Drill-down dashboard: Applications Response Time.
Master widget: Top 10 Inbound Packet Discard (Packets). No listener widgets or drill-down dashboard.

Quality of Service
Master widget: Top 10 Round-Trip Time (ms). Click any bar that represents the round-trip time between a source and destination device IP address to open the IPSLA History dashboard in a new tab. Drill-down dashboard: IPSLA History.
Master widget: Top 10 Probe Loss (%). Click any bar that represents the probe loss between a source and destination device IP address to open the IPSLA History dashboard in a new tab. Drill-down dashboard: IPSLA History.
Master widget: Top 10 VoIP Inbound Jitter (ms) or Top 10 VoIP Outbound Jitter (ms). Click any bar that represents the VoIP Inbound or Outbound Jitter between a source and destination device IP address to refresh the listener widgets. Note: Click the Data Source icon ( ) to toggle between the Top 10 VoIP Inbound Jitter (ms) and Top 10 VoIP Outbound Jitter (ms) widgets.
Listener widget: VoIP Voice Quality. The widget displays the MOS and IcPIF metrics for the same devices. Click the Source - Destination link to open the IPSLA History dashboard in a new tab. Drill-down dashboard: IPSLA History.

Traffic
Master widget: Top 10 Outbound Utilization (%). Click any bar that represents the interface outbound utilization to refresh the listener widgets to display the related data for the selected interface.
Master widget: Top 10 Inbound Utilization (%). Click any bar that represents the interface inbound utilization to refresh the listener widgets to display the related data for the selected interface.
Listener widgets: Top 10 Applications by Total Volume (Octets) and Top 10 Applications by Average Utilization (%). Click the Find out more details about the Application. Click here link to open the Top Applications: Traffic Volume Details for Interface dashboard in a new tab. Drill-down dashboard: Top Applications.

Device load
Master widget: Top 10 CPU Load (%). Click any bar that represents the CPU Load on a resource to open the Device Health History dashboard for the same resource in a new tab. Drill-down dashboard: Device Health History.
Master widget: Top 10 Memory Load (%). Click any bar that represents the Memory Load on a resource to open the Device Health History dashboard for the same resource in a new tab. Drill-down dashboard: Device Health History.

Table 88. Available widgets

Top 10 Outbound Packet Discard (Packets) (Bar)
Measures the number of outbound packets that are discarded, even though no errors are detected, to free up buffer space. This might be due to resource limitations on the outbound interface.

Total Packet Drops Per Queue (Packets) (Bar)
Measures the number of packet drops per traffic class queue per interface.

Top 10 Inbound Packet Discard (Packets) (Bar)
Measures the number of inbound packets that are discarded, even though no errors are detected, to free up buffer space. This might be due to resource limitations on the inbound interface.

Top 10 Applications by Total Delay (ms) (Bar)
Measures the maximum total time that it takes an application to respond to user requests. The total delay is the sum of max client network delay, max server network delay, and application delay.

Top 10 Round-Trip Time (ms) (Bar)
Measures the time that is taken between sending a UDP echo request message from a source device to the destination device and receiving a UDP echo reply from the destination device. It is useful in troubleshooting issues with business-critical applications by determining the round-trip delay times and testing connectivity between devices.

Top 10 Probe Loss (%) (Bar)
Measures the percentage of probes lost. It shows the completion status of an RTT operation.

Top 10 VoIP Inbound Jitter (ms) (Bar)
Measures the variance in latency over time in the incoming direction between two devices in your network.

Top 10 VoIP Outbound Jitter (ms) (Bar)
Measures the variance in latency over time in the outgoing direction between two devices in your network.

VoIP Voice Quality (Badge)
Measures voice performance, which is important for delivering voice quality services. This widget shows two metrics:
• Mean Opinion Score (MOS). A common benchmark to determine voice quality is MOS. With MOS, a wide range of listeners can judge the quality of voice samples on a scale of 1 (bad quality) to 5 (excellent quality). The Cisco IOS implementation of MOS takes RTT delay and packet loss into the MOS calculation. However, jitter is not included. The following colors on the widget denote the severity of degradation of voice quality:
– Green, when the MOS value is greater than 4 and less than or equal to 5
– Amber, when the MOS value is greater than 2 and less than or equal to 4
– Red, when the MOS value is greater than 1 and less than or equal to 2
• Impairment/Calculated Planning Impairment Factor (IcPIF). IcPIF values are expressed in a typical range of 5 (low impairment) to 55 (high impairment). Typically, IcPIF values that are numerically less than 20 are considered adequate. The following colors on the widget denote the severity of degradation of voice quality:
– Green, when the IcPIF value is greater than 0 and less than or equal to 14
– Amber, when the IcPIF value is greater than 14 and less than or equal to 34
– Red, when the IcPIF value is greater than 34 and less than or equal to 93

Top 10 Outbound Utilization (%) (Bar)
Measures outbound bandwidth utilization for the highest Outbound Packet Discards on the interfaces.

Top 10 Applications by Total Volume (Octets) (Bar)
Measures the corresponding applications with the highest bandwidth utilization by volume.

Top 10 Inbound Utilization (%) (Bar)
Measures the bandwidth utilization for incoming traffic for the highest Inbound Packet Discards on the interfaces.

Top 10 Applications by Average Utilization (%) (Bar)
Measures the corresponding applications with the highest bandwidth utilization by percentage.

Top 10 CPU Load (%) (Bar)
Measures the total CPU utilization in percentage across all vendor devices in your network. Currently, Network Performance Insight supports the following vendors: Cisco, Huawei, and Juniper.

Top 10 Memory Load (%) (Bar)
Measures the total memory usage in percentage across all vendor devices in your network. Currently, Network Performance Insight supports the following vendors: Cisco, Huawei, and Juniper.
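The MOS and IcPIF color bands listed for the VoIP Voice Quality badge map directly to simple range checks. The following Python sketch restates those bands; it is illustrative only, and the function names are hypothetical:

# A minimal sketch of the VoIP Voice Quality badge colour thresholds from
# Table 88 (MOS scale 1-5; IcPIF typical range 5-55).

def mos_colour(mos):
    if 4 < mos <= 5:
        return "green"   # excellent to good quality
    if 2 < mos <= 4:
        return "amber"   # degraded quality
    if 1 < mos <= 2:
        return "red"     # bad quality
    return None

def icpif_colour(icpif):
    if 0 < icpif <= 14:
        return "green"   # low impairment
    if 14 < icpif <= 34:
        return "amber"   # moderate impairment
    if 34 < icpif <= 93:
        return "red"     # high impairment
    return None

print(mos_colour(4.2), icpif_colour(12))   # Prints: green green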

Network Performance Overview by Deviation dashboard For all the widgets that display the metrics by deviation, the values are calculated by computing the deviation of the current data compared against an average value for the same day of the week over the last four weeks.
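Interpreted literally, this means the deviation percentage for a measure is computed from a four-week, same-weekday baseline. The following Python sketch shows one plausible reading of that calculation with hypothetical samples; the exact baseline computation that the product uses may differ:

# A minimal sketch of a same-weekday deviation calculation: the current
# value is compared against the average of the values observed on the
# same day of the week over the last four weeks.

def deviation_percent(current, same_weekday_history):
    baseline = sum(same_weekday_history) / len(same_weekday_history)
    return (current - baseline) / baseline * 100

# Hypothetical utilization on the last four Mondays vs. this Monday:
# baseline = (40 + 50 + 45 + 65) / 4 = 50, so the deviation is +60%.
print(round(deviation_percent(80.0, [40.0, 50.0, 45.0, 65.0]), 1))  # 60.0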

Available widgets and their interactivity The master-listener and drill-down widget interactions can be seen in the following ways:

• For any widgets with the ( ) drill-down icon, click any bar in the chart that represents the interface or metric to open the drill-down dashboard page, which contains the associated widgets. • For the other widgets that are shown in the diagram to have a master-listener interaction, click any of the bars that represent the interface to change the listener widget, which displays the related data for the selected interface. The diagram shows the master-listener and drill-down interactions between the available widgets:

Network Performance Overview by Deviation
1. Click Home > Network Performance Overview by Deviation. The Network Performance Overview by Deviation: Top 10 Deviations dashboard loads. This dashboard displays the metric deviation values for the top 10 resources.
2. You can filter data based on the following filters:
• Site: the list contains the locations to view the traffic levels.
• Business Hour: the list consists of Business Hour, Non-Business Hour, and both as ALL.
• Time Period: the list consists of Last Hour and Last 24 Hours.
Note:
• You can categorize your enterprise network based on different geographical areas and sites. You can configure the sites by specifying the IP address ranges and the time period for business hours and non-business hours. To configure these properties, see Configuring site grouping.
• Refer to the conditions that apply when you use the Site and Business Hour filter options.

Table 89. Widget interactions

Congestion
Master widget: Top 10 Outbound Packet Discard Deviation (%). Click any bar that represents the interface outbound packet discard deviation to refresh the listener widgets to display the related data for the selected interface.
Listener widget: Total Packet Drops Per Queue (Packets). Click the Find out more details about Packet Drops per Queue. Click here link to open the QoS Queue Drops dashboard in a new tab. Drill-down dashboard: QoS Queue Drops.
Listener widget: Top 10 Applications by Total Delay (ms). Click any bar that represents an application in Top 10 Applications by Total Delay (ms) to open the Applications Response Time page in a new tab. Drill-down dashboard: Applications Response Time.
Master widget: Top 10 Inbound Packet Discard Deviation (%). No listener widgets or drill-down dashboard.

Quality of Service
Master widget: Top 10 Round-Trip Time Deviation (%). Click any bar that represents the round-trip time between a source and destination device IP address to open the IPSLA History dashboard in a new tab. Drill-down dashboard: IPSLA History.
Master widget: Top 10 Probe Loss Deviation (%). Click any bar that represents the probe loss between a source and destination device IP address to open the IPSLA History dashboard in a new tab. Drill-down dashboard: IPSLA History.
Master widget: Top 10 VoIP Inbound Jitter Deviation (%) or Top 10 VoIP Outbound Jitter Deviation (%). Click any bar that represents the VoIP Jitter Deviation between a source and destination device IP address to refresh the listener widgets. Note: Click the Data Source icon ( ) to toggle between the Top 10 VoIP Outbound Jitter Deviation (%) and Top 10 VoIP Inbound Jitter Deviation (%) widgets.
Listener widget: VoIP Voice Quality. The widget displays the MOS and IcPIF metrics for the same devices. Click the Source - Destination link to open the IPSLA History dashboard in a new tab. Drill-down dashboard: IPSLA History.

Traffic
Master widget: Top 10 Outbound Utilization Deviation (%). Click any bar that represents the interface outbound utilization deviation to refresh the listener widgets to display the related data for the selected interface.
Master widget: Top 10 Inbound Utilization Deviation (%). Click any bar that represents the interface inbound utilization deviation to refresh the listener widgets to display the related data for the selected interface.
Listener widgets: Top 10 Applications by Total Volume (Octets) and Top 10 Applications by Average Utilization (%). Click the Find out more details about the Application. Click here link to open the Top Applications: Traffic Volume Details for Interface dashboard in a new tab. Drill-down dashboard: Top Applications.

Device load
Master widget: Top 10 CPU Load Deviation (%). Click any bar that represents the CPU Load Deviation on a resource to open the Device Health History dashboard for the same resource in a new tab. Drill-down dashboard: Device Health History.
Master widget: Top 10 Memory Load Deviation (%). Click any bar that represents the Memory Load Deviation on a resource to open the Device Health History dashboard for the same resource in a new tab. Drill-down dashboard: Device Health History.

Table 90. Available Widgets

Top 10 Outbound Packet Discard Deviation (%) (Bar)
Measures the number of outbound packets that are discarded, even though no errors are detected, to free the buffer space. It might be due to resource limitations on the outbound interface.

Total Packet Drops Per Queue (Packets) (Bar)
Measures the number of packet drops per traffic class queue per interface.

Top 10 Inbound Packet Discard Deviation (%) (Bar)
Measures the number of inbound packets that are discarded, even though no errors are detected, to free the buffer space. It might be due to resource limitations on the inbound interface.

Top 10 Applications by Total Delay (ms) (Bar)
Measures the maximum total time that it takes an application to respond to user requests. The total delay is the sum of the maximum client network delay, the maximum server network delay, and the application delay.

Top 10 Round-Trip Time Deviation (%) (Bar)
Measures the time that is taken between sending a UDP echo request message from a source device to the destination device and receiving a UDP echo reply from the destination device. It is useful in troubleshooting issues with business-critical applications by determining the round-trip delay times and testing connectivity between devices.

Top 10 Probe Loss Deviation (%) (Bar)
Measures the percentage of probes lost. It shows the completion status of an RTT operation.

Top 10 VoIP Inbound Jitter Deviation (%) (Bar)
Measures the variance in latency (jitter) deviation over time in the incoming direction between two devices in your network.

Top 10 VoIP Outbound Jitter Deviation (%) (Bar)
Measures the variance in latency (jitter) deviation over time in the outgoing direction between two devices in your network.

VoIP Voice Quality (Badge)
Measures voice performance, which is important for delivering voice quality services. This widget shows two metrics:
• Mean Opinion Score (MOS). A common benchmark to determine voice quality is MOS. With MOS, a wide range of listeners can judge the quality of voice samples on a scale of 1 (bad quality) to 5 (excellent quality). The Cisco IOS implementation of MOS takes RTT delay and packet loss into the MOS calculation. However, jitter is not included. The following colors on the widget denote the severity of degradation of voice quality:
– Green, when the MOS value is between 4 - 5
– Amber, when the MOS value is between 2 - 3
– Red, when the MOS value is between 0 - 1
• Impairment/Calculated Planning Impairment Factor (IcPIF). IcPIF values are expressed in a typical range of 5 (low impairment) to 55 (high impairment). Typically, IcPIF values numerically less than 20 are considered adequate. The following colors on the widget denote the severity of degradation of voice quality:
– Green, when the IcPIF value is between 0 - 13
– Amber, when the IcPIF value is between 14 - 33
– Red, when the IcPIF value is between 34 - 93

Top 10 Outbound Utilization Deviation (%) (Bar)
Measures the outbound bandwidth utilization deviation for the interfaces with the highest outbound packet discards.

Top 10 Applications by Total Volume (Octets) (Bar)
Measures the corresponding applications with the highest bandwidth utilization by volume.

Top 10 Inbound Utilization Deviation (%) (Bar)
Measures the bandwidth utilization deviation for incoming traffic for the interfaces with the highest inbound packet discard deviation.

Top 10 Applications by Average Utilization (%) (Bar)
Measures the corresponding applications with the highest bandwidth utilization by percentage.

Top 10 CPU Load Deviation (%) (Bar)
Measures the total CPU utilization deviation in percentage across all vendor devices in your network. Currently, Network Performance Insight supports the following vendors:
• Cisco
• Huawei
• Juniper

Top 10 Memory Load Deviation (%) (Bar)
Measures the total memory usage deviation in percentage across all vendor devices in your network. Currently, Network Performance Insight supports the following vendors:
• Cisco
• Huawei
• Juniper
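The MOS and IcPIF color bands above amount to a simple threshold lookup. The following is a minimal Python sketch of that mapping; the function names and the handling of values that fall between the documented bands are illustrative assumptions, not part of the product.

def mos_color(mos: float) -> str:
    # Bands as documented for the VoIP Voice Quality badge.
    if 4 <= mos <= 5:
        return "green"
    if 2 <= mos <= 3:
        return "amber"
    if 0 <= mos <= 1:
        return "red"
    return "unclassified"  # the documented bands leave gaps (1-2, 3-4)

def icpif_color(icpif: float) -> str:
    # IcPIF impairment bands as documented above.
    if 0 <= icpif <= 13:
        return "green"
    if 14 <= icpif <= 33:
        return "amber"
    if 34 <= icpif <= 93:
        return "red"
    return "unclassified"

print(mos_color(4.2), icpif_color(18))  # green amber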

WiFi Overview dashboard
WiFi overview dashboards are instrumental for enterprises in monitoring the health and performance of the WiFi network. This dashboard represents the key performance indicators (KPIs) of a WiFi network in the form of widgets. You can navigate further from these widgets to analyze specific diagnostics. The WiFi Overview is based on the following three key components:
• Client Count: The number of clients that are connected to a particular WiFi network (SSID) or access point (AP) tells the network administrator how much load is on that SSID or access point. If the client count increases at any point in time, the network administrator can decide to move clients between access points or SSIDs to reduce the load on the network.
• Signal Characteristics: WiFi overview dashboards display signal quality details by using the following two characteristics:
– Received Signal Strength Indicator (RSSI): RSSI is an estimated measure of the power of the signal that a client receives from an access point. It varies with the distance between clients and access points, and therefore indicates when the network administrator must adjust the client and access point locations to maintain the overall network performance.
– Signal-to-Noise Ratio (SNR): The signal-to-noise ratio compares the power of the signal with the noise that is present in the network. This ratio is an indicator of the quality of the signal: the more noise, the less healthy the signal. If a client experiences a low signal-to-noise ratio, a network administrator adjusts the placement of the AP to avoid physical obstacles, or adjusts the client count of the network.
• Worst Performing Channels: The worst performing channels widget in the dashboard gives the details of the channels that experience a high intensity of interference and noise. It helps to identify the channels that are weak points in the network, and gives the network administrator a lead to analyze the network and improve the performance.
Refer to the widget details to understand the fundamentals of the WiFi overview dashboards.

Available widgets and their interactivity The drill-down widget interactions can be seen in the following ways:

• For any widget with the drill-down icon, click any bar in the chart that represents the interface or metric to open the drill-down dashboard page, which contains the associated widgets. The diagram shows the interactions between widgets and their drill-down widgets:

WiFi Overview
1. Click Home > WiFi Overview. The WiFi Overview dashboard loads. This dashboard displays the metric values for the KPIs.
2. You can filter data based on the following filters:
• Controller, contains the IP address of a controller that manages the access points of the network.
• Time Period, the list contains Last Hour, Last 6 Hours, Last 12 Hours, Last 24 Hours, Last 7 Days, Last 30 Days, Last 365 Days, and Custom.

Table 91. Widget interactions

• Total APs. Drill-down dashboard: N/A.
• Total SSIDs. Drill-down dashboard: N/A.
• 2.4 GHz Clients. Drill-down dashboard: WiFi Client Count.
• 5 GHz Clients. Drill-down dashboard: WiFi Client Count.
• Clients by RSSI Quality: Poor, Average, Good. Drill-down dashboard: WiFi Client Count.
• Clients by SNR Quality: Poor, Average, Good. Drill-down dashboard: WiFi Client Count.
• WiFi Network Details. Drill-down dashboard: N/A.
• Access Points Details. Drill-down dashboard: N/A.
• 2.4 GHz: Worst 10 Performing Radio By Channel Utilization (%). Drill-down dashboard: N/A.
• 5 GHz: Worst 10 Performing Radio By Channel Utilization (%). Drill-down dashboard: N/A.
• Worst 10 Performing Channel By Interference. Drill-down dashboard: WiFi Interference and Noise Performance.

Table 92. Available Widgets

Total APs (Badge)
Displays the number of access points that are connected to the selected controller.

Total SSIDs (Badge)
Displays the number of SSIDs (WiFi networks) that are configured on the selected controller.

2.4 GHz Clients (Badge)
Displays the client count on the 2.4 GHz frequency band.

5 GHz Clients (Badge)
Displays the client count on the 5 GHz frequency band.

Clients by RSSI Quality: Poor (Badge)
Displays the number of clients that receive a poor signal strength. The standard RSSI range is less than or equal to -70 dBm.

Clients by RSSI Quality: Average (Badge)
Displays the number of clients that receive an average signal strength. The standard RSSI range is between -69 and -46 dBm.

Clients by RSSI Quality: Good (Badge)
Displays the number of clients that receive a good signal strength. The standard RSSI range is greater than or equal to -45 dBm.

Clients by SNR Quality: Poor (Badge)
Displays the number of clients that experience the lowest signal-to-noise ratio. The standard SNR range is 0 to 12 dBm. If many clients are in the Poor range, network performance is hampered.

Clients by SNR Quality: Average (Badge)
Displays the number of clients that experience a moderate signal-to-noise ratio. The standard SNR range is 13 to 19 dBm.

Clients by SNR Quality: Good (Badge)
Displays the number of clients that experience the highest signal-to-noise ratio. The standard SNR range is equal to or greater than 20 dBm. The more clients in this range, the better the network performance.

WiFi Network Details (Grid)
Displays the list of attributes of an SSID:
1. Status of the SSID as Enabled (1) or Disabled (0)
2. QoS Profile (0: Bronze, 1: Silver, 2: Gold, 3: Platinum)
3. Number of clients connected to the SSID

Access Points Details (Grid)
Displays the list of attributes of access points (APs):
1. Location of the AP
2. IP address of the AP
3. Status as Enabled (1) or Disabled (2)
4. Number of clients connected to the AP

2.4 GHz: Worst 10 Performing Radio By Channel Utilization (%) (Bar)
Displays the channels that utilize most of the bandwidth of the 2.4 GHz frequency.

5 GHz: Worst 10 Performing Radio By Channel Utilization (%) (Bar)
Displays the channels that utilize most of the bandwidth of the 5 GHz frequency.

Worst 10 Performing Channel By Interference (Bar)
Displays the channels that experience the highest intensity of interference.
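The RSSI and SNR quality bands that the badges use can be expressed as a simple classification. The following Python sketch applies the thresholds documented above; the function names and sample readings are hypothetical, and the SNR unit follows the table.

def rssi_quality(rssi_dbm: float) -> str:
    # Bands as documented: Poor <= -70 dBm, Good >= -45 dBm.
    if rssi_dbm <= -70:
        return "Poor"
    if rssi_dbm >= -45:
        return "Good"
    return "Average"  # between -69 and -46 dBm

def snr_quality(snr: float) -> str:
    # Bands as documented: Poor 0-12, Average 13-19, Good >= 20.
    if snr <= 12:
        return "Poor"
    if snr >= 20:
        return "Good"
    return "Average"

clients = [(-42, 25), (-68, 15), (-75, 8)]  # hypothetical (RSSI, SNR) samples
for rssi, snr in clients:
    print(f"RSSI {rssi} dBm: {rssi_quality(rssi)}, SNR {snr}: {snr_quality(snr)}")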

WiFi Client Count
The WiFi Client Count dashboard gives you the number of clients that are attached to the 2.4 GHz and 5 GHz frequencies, and the client count that is affected by the RSSI and SNR quality trend of the WiFi signal. This client count is distributed based on a timestamp. A time-series representation of the client count helps you to identify a definite pattern of client experience in terms of frequency and quality of signal.

Access WiFi Client Count
1. Click Home > WiFi Overview.
2. Drill down from one of the following widgets:
• Clients by RSSI Quality: click Poor, Average, or Good on the badge widget.
• Clients by SNR Quality: click Poor, Average, or Good on the badge widget.

Client Count widgets
You can filter the existing Client Count metrics by using the following filters.
1. From the filter options, choose the Controller and Time Period, and click Apply Filter. The dashboard refreshes the data according to the filter attribute values.

Table 93. Client Count Widgets

2.4 GHz: Client Count Trend (Timeseries)
Displays the number of clients that are connected to the 2.4 GHz frequency, based on timestamp.

5 GHz: Client Count Trend (Timeseries)
Displays the number of clients that are connected to the 5 GHz frequency, based on timestamp.

Client Count by RSSI Quality Trend (Timeseries)
Displays the RSSI quality (signal strength) that is experienced by a number of clients, based on timestamp. The client count is divided into three categories of signal strength: Poor, Average, Good.

Client Count by SNR Quality Trend (Timeseries)
Displays the SNR quality (signal-to-noise ratio) that is experienced by a number of clients, based on timestamp. The client count is divided into three categories of signal-to-noise ratio: Poor, Average, Good.

WiFi Interference and Noise Performance
A drill-down dashboard of the Worst 10 Performing Channel By Interference widget, this dashboard displays the characteristics of a signal. You can identify the interference and noise ratio for channels. The timeseries chart helps you notice a pattern of signal health for a particular controller and channel.

Access WiFi Interference and Noise Performance
1. Click Home > WiFi Overview.
2. Click any bar on the Worst 10 Performing Channel By Interference widget.

WiFi Interference and Noise Performance widgets
You can filter the existing WiFi Interference and Noise Performance metrics with the following filters.
1. From the filter options, choose the Controller, AP Radio Channel, and Time Period, and click Apply Filter. The dashboard refreshes the data according to the filter attribute values.

Table 94. WiFi Interference and Noise Performance widgets

Interference Score (%) Trend (Timeseries)
Displays the interference score for a selected controller and AP radio channel. For the selected controller, you can change the AP radio channel to identify a channel with a lower interference score.

Interference Power (dBm) Trend (Timeseries)
Displays the intensity of interference for the selected channel and controller. Interference power is measured in dBm. For the selected controller, you can change the AP radio channel to identify a channel with lower interference power.

Noise (dBm) Trend (Timeseries)
Displays the intensity of noise that is present in the selected channel of the selected controller. For the selected controller, you can change the AP radio channel to identify a channel with less noise in its signal.

IP Links Performance Overview dashboard
The network administrator uses the IP Links Performance Overview dashboard to reduce the mean time that is taken to identify an event in a network, its effects on the network, and its root cause, so that network performance can be managed efficiently. The IP links performance is monitored through a heat map, which provides an overall view of outbound one-way link performance for speech applications between source and destination servers at various geographical locations. The performance of IP links is affected by latency, jitter, and packet loss. These three factors are defined as follows:
• Latency: Latency is a measurement of the delay of a message from source to destination. It is measured in milliseconds.
• Jitter: Jitter is the variation in the arrival time of messages from source to destination. In other words, jitter is a variation in latency. It is measured in milliseconds.
• Packet Loss: Packet loss is the percentage of packets lost with respect to the packets sent.
This dashboard displays the latency between the source and destination servers in a particular geographical area. Based on the latency value, the network administrator can drill down to the links with the highest latency and get more information about jitter and packet loss between the servers.
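As a worked illustration of the definitions above, the following Python sketch derives jitter from a series of latency samples, as the variation between consecutive delays (loosely in the spirit of RFC 3550 interarrival jitter), and packet loss from sent and received counts. The sample values and function names are hypothetical; this is not Network Performance Insight's exact calculation.

latencies_ms = [102.0, 98.5, 110.2, 105.7, 99.9]  # hypothetical one-way delays

def mean_jitter_ms(samples):
    # Average absolute difference between consecutive latency samples.
    diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
    return sum(diffs) / len(diffs) if diffs else 0.0

def packet_loss_pct(sent: int, received: int) -> float:
    # Packet loss as a percentage of the packets sent.
    return 100.0 * (sent - received) / sent if sent else 0.0

print(f"jitter = {mean_jitter_ms(latencies_ms):.2f} ms")
print(f"loss   = {packet_loss_pct(1000, 988):.2f} %")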

Recommended Average Outbound One Way Latency for Speech Application
A new chart type called a heat map is introduced in this dashboard to represent latency. A heat map is a graphical representation of data where the individual values contained in a matrix are represented as colors. The parameters of the heat map are:
• Rows: The rows of the map represent the source IP address.
• Columns: The columns of the map represent the destination IP address.
• Values: The color-coded values of the map represent the latency between the source and destination IP links.
Click any value in the heat map to drill down to the Source and Destination Details dashboard. The standard ranges for latency, and the color that is associated with each range, are given at the bottom of the heat map. These ranges are taken from the ITU-T G.114 standards for one-way latency for speech applications. The standard ranges are as follows:

Table 95. Recommended standard ranges for Latency in speech applications
Each range is identified in the heat map by its own color code:
• 0 - 150 ms: The IP links are performing excellently within this range of latency.
• 151 - 237 ms: The IP links are performing very well within this range of latency.
• 238 - 337 ms: The IP links are performing well within this range of latency.
• 338 - 500 ms: The IP links are performing fairly within this range of latency.
• 501 - 700 ms: The IP links are performing poorly within this range of latency.
• Greater than 700 ms: The IP links are performing very poorly within this range of latency.

Blank values in the heat map, which do not fall under any of the color codes, indicate that there is no data flow or communication between the source and destination IP links.
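The band boundaries in Table 95 amount to a lookup from a measured latency to a performance rating. The following is a minimal Python sketch, assuming the ranges above; the function name and rating labels are illustrative.

def latency_band(latency_ms: float) -> str:
    # Upper bounds of the ITU-T G.114 bands that Table 95 documents.
    bands = [
        (150, "excellent"),
        (237, "very good"),
        (337, "good"),
        (500, "fair"),
        (700, "poor"),
    ]
    for upper_ms, rating in bands:
        if latency_ms <= upper_ms:
            return rating
    return "very poor"  # greater than 700 ms

print(latency_band(120))  # excellent
print(latency_band(650))  # poor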

IP Links Performance Overview
1. Click Home > IP Links Performance Overview. This dashboard displays the IP links performance in the form of a heat map.
2. You can filter data based on the following filters:
• Site, the list contains the locations of different networks that contain various source and destination servers.
• Business Hour, the list contains Business Hour, Non-Business Hour, and both as ALL.
• Time Period, the list contains the following options:
– Last Hour
– Last 6 Hours
– Last 12 Hours
– Last 24 Hours
– Last 7 Days
– Last 30 Days
Note:
• You can categorize your enterprise network based on different geographical areas and sites. You can configure the sites by specifying the IP address ranges and the time period for business hours and non-business hours. To configure these properties, see Configuring site grouping.
• Refer to the conditions that apply when you use the Site and Business Hour filter options.

Source and Destination Details dashboard
This is a drill-down dashboard of the IP Links Performance Overview dashboard. In the IP Links Performance Overview dashboard, when you click one of the colored boxes of the heat map, it drills down to the Source and Destination Details dashboard. Latency, jitter, and packet loss are interdependent: when packet loss increases, the user experiences higher latency, and in turn the jitter value is affected, because the latency fluctuates.

Source and Destination Details
1. Click Home > IP Links Performance Overview. Click any colored value on the Recommended Average Outbound One Way Latency for Speech Application heat map.
2. You can filter the data based on the following filters:
• Source, the list contains the IP addresses of source servers.
• Destination, the list contains the IP addresses of destination servers.
• Time Period, the list contains the following options:
– Last Hour
– Last 6 Hours
– Last 12 Hours
– Last 24 Hours
– Last 7 Days
– Last 30 Days
3. Click Apply Filter.

Table 96. Available Widgets

Source-Destination (Badge)
Displays the IP addresses of the selected source and destination.

Maximum Latency: Outbound (ms), Inbound (ms) (Badge)
Displays the outbound and inbound latency in milliseconds for the source and destination IP links. Standard range for outbound and inbound latency:
• Green: 0 - 337 ms
• Yellow: 338 - 700 ms
• Red: Greater than 700 ms

Maximum Jitter: Outbound (ms), Inbound (ms) (Badge)
Displays the outbound and inbound jitter in milliseconds for the source and destination IP links. Standard range for outbound and inbound jitter:
• Green: 0 - 30 ms
• Yellow: 31 - 40 ms
• Red: Greater than 40 ms

Maximum Packet Loss: Outbound (%), Inbound (%) (Badge)
Displays the outbound and inbound packet loss in percentage for the source and destination IP links. Standard range for outbound and inbound packet loss:
• Green: 0% - 1%
• Yellow: 1.01% - 5%
• Red: Greater than 5%

Latency, Jitter and Packet Loss Trend: Outbound, Inbound (Timeseries)
Displays the latency in milliseconds, the jitter in milliseconds, and the packet loss in percentage for the source and destination IP links at specific timestamps.
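The green/yellow/red bands in Table 96 can be summarized in one threshold table. The following is a minimal Python sketch; the data layout and function name are assumptions for illustration, not part of the product.

THRESHOLDS = {
    # metric: (green_upper, yellow_upper); anything above yellow_upper is red
    "latency_ms": (337, 700),
    "jitter_ms": (30, 40),
    "packet_loss_pct": (1, 5),
}

def badge_color(metric: str, value: float) -> str:
    green_upper, yellow_upper = THRESHOLDS[metric]
    if value <= green_upper:
        return "green"
    if value <= yellow_upper:
        return "yellow"
    return "red"

print(badge_color("latency_ms", 410))       # yellow
print(badge_color("packet_loss_pct", 0.4))  # green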

Load Balancer dashboards
The Load Balancer dashboards monitor network load balancing with F5 BIG-IP technology. The Load Balancer dashboards provide insight into the distribution of network traffic across server resources in multiple geographies. These servers can be on premises or hosted on cloud. These dashboards provide visualizations of the performance and availability of your global applications. Data centers that are spread geographically can slow down user requests, so network traffic must be managed instantly and load balanced during peak demand and downtime. The number of application connection requests and the utilization from users can exceed the capacity of a server that hosts the application. The load-balancing mechanism helps to distribute the inbound requests, and the processing load of the responses, effectively across a group of servers that run the same application.

Load balancer components
Typically, the load balancer services that are based on F5 technology consist of the following components:
• Global Traffic Manager (GTM). GTM, also called BIG-IP DNS, improves the performance and availability of your global applications by sending users to the closest or best-performing physical, virtual, or cloud environment. It also hyperscales and secures your DNS infrastructure from DDoS attacks. It is the BIG-IP module that is built to monitor the availability and performance of global resources and to use that information to manage network traffic patterns.
• Local Traffic Manager (LTM). LTM is the BIG-IP module that manages and optimizes traffic for network applications and clients. BIG-IP LTM treats all network traffic as stateful connection flows. Even connectionless protocols such as User Datagram Protocol (UDP) and Internet Control Message Protocol (ICMP) are tracked as flows.
• Virtual servers. A virtual server allows BIG-IP systems to send, receive, process, and relay network traffic. A virtual server is a proxy of the actual server (physical, virtual, or container), combined with a virtual IP address, which is the application endpoint that is presented to the outside world.
• Pools. A pool is a configuration object that groups pool members together to receive and process network traffic, as determined by a specified load-balancing algorithm. Pools are collections of similar services that are available on any number of hosts.
• Pool members. A pool member is a node and service port to which BIG-IP LTM can load balance the network traffic. Nodes can be members of multiple pools. The pool member includes the definition of the application port and the IP address of the physical or virtual server, and is referred to as the service. This enables unique load balancing and health monitoring that is based on the services instead of the host.
• Nodes. A node is a configuration object that is represented by the IP address of a device on the network.
• Wide IPs. A Wide IP is the fully qualified domain name (FQDN) of a service.

Load balancing methods
Static load balancing methods do not use any traffic metrics from the node to distribute traffic. Dynamic load balancing methods use traffic metrics from the node to distribute traffic. Health monitors keep a close eye on the health of a resource to deem it available or unavailable; they are independent of the load balancing methods. Performance monitors measure the host's performance and dynamically send traffic to hosts in the pool; they work with the corresponding dynamic load balancing methods. Health monitors can be applied at the node level or at the pool level, but performance monitors can be applied at the node level only. For more information about the different load balancing methods and algorithms that can be used, see Understanding F5 Load Balancing Methods.
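To illustrate the static versus dynamic distinction, the following Python sketch contrasts a round-robin method, which ignores node metrics, with a least-connections method, which consults a live traffic metric. The pool members and connection counts are hypothetical; this is not the F5 implementation.

import itertools

pool = ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"]  # hypothetical members

# Static: round-robin cycles through members without looking at metrics.
rr = itertools.cycle(pool)
def round_robin():
    return next(rr)

# Dynamic: least-connections consults a live traffic metric per member.
current_connections = {"10.0.0.1:80": 12, "10.0.0.2:80": 3, "10.0.0.3:80": 7}
def least_connections():
    return min(pool, key=lambda m: current_connections[m])

print([round_robin() for _ in range(4)])  # cycles .1, .2, .3, .1
print(least_connections())                # picks 10.0.0.2:80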

Load Balancer Overview dashboard
With the help of the F5 Load Balancer from F5 Networks, Inc., network traffic is diverted from servers that are overloaded to other servers that can handle the load. The F5 load balancer service consists of many components. All the components, and the key performance metrics that they collect, are displayed in the overview dashboard.

Available widgets and their interactivity The Load Balancer Overview dashboard and its widgets:

The master-listener and drill-down widget interactions can be seen in the following ways:

• For any widget with the drill-down icon, click any bar in the chart that represents the interface or metric to open the drill-down dashboard page, which contains the associated widgets.
• For the other widgets that are shown in the diagram that have a master-listener interaction, click any of the values to change the listener widget, which displays the related data for the selected value.
The diagram shows the master-listener and drill-down interactions between the available widgets:

Load Balancer Overview
1. Click Home > Load Balancer > Load Balancer Overview. This dashboard displays the F5 load balancer components.
2. You can filter data based on the following filters:
• Site, the list contains the locations to view the traffic levels.
• Business Hour, the list contains Business Hour, Non-Business Hour, and both as ALL.
• Time Period, the list contains the following options:
– Last Hour
– Last 6 Hours
– Last 12 Hours
– Last 24 Hours
– Last 7 Days
– Last 30 Days
– Last 365 Days
– Custom

Note:
• You can categorize your enterprise network based on different geographical areas and sites. You can configure the sites by specifying the IP address ranges and the time period for business hours and non-business hours. To configure these properties, see Configuring site grouping.
• Refer to the conditions that apply when you use the Site and Business Hour filter options.
3. Click Apply Filter.

Table 97. Widget interactions

• Local Traffic Managers. Listener widgets: N/A. Drill-down widget: LTM Details.
• Global Traffic Managers. Listener widgets: N/A. Drill-down widget: GTM Details.
• Virtual Servers. Listener widgets: N/A. Drill-down widget: Virtual Server Details.
• Pools. Listener widgets: N/A. Drill-down widget: Pool Details.
• Pool Members. Listener widgets: N/A. Drill-down widget: Pool Member Details.
• Top 5 LTM - Ranked by Current Connections. Listener widget: Top 5 Virtual Servers by Current Connections. Drill-down widgets: N/A.
• Top 5 Virtual Servers by Current Connections. Listener widgets: N/A. Drill-down widget: Virtual Server Details.

Table 98. Available widgets

Local Traffic Managers (Badge)
Displays the number of Local Traffic Managers that are available in your load-balancing service. It also displays the state of the LTMs. The value and color code indicate the number of LTMs that are unavailable, partially available, and available. Explanation for the color codes is as follows:
• Green: Available
• Orange: Partially Available
• Red: Unavailable

Virtual Servers (Badge)
Displays the number of Virtual Servers that are available in your load-balancing service. It also displays the state of the Virtual Servers. The value and color code indicate the number of Virtual Servers that are unavailable, partially available, and available. Explanation for the color codes is as follows:
• Green: Available
• Orange: Partially Available
• Red: Unavailable

Global Traffic Managers (Badge)
Displays the number of Global Traffic Managers that are available in your load-balancing service. It also displays the state of the GTMs. The value and color code indicate the number of GTMs that are unavailable, partially available, and available. Explanation for the color codes is as follows:
• Green: Available
• Orange: Partially Available
• Red: Unavailable

Pools (Badge)
Displays the number of Pools that are available in your load-balancing service. It also displays the state of the Pools. The value and color code indicate the number of Pools that are unavailable, partially available, and available. Explanation for the color codes is as follows:
• Green: Available
• Orange: Partially Available
• Red: Unavailable

Pool Members (Badge)
Displays the number of Pool Members that are available in your load-balancing service. It also displays the state of the Pool Members. The value and color code indicate the number of Pool Members that are unavailable, partially available, and available. Explanation for the color codes is as follows:
• Green: Available
• Orange: Partially Available
• Red: Unavailable

Top 5 LTM - Ranked by Current Connections (Grid)
Displays the top five LTMs by current connections. Current or active connections are the connections that are available to that LTM at a particular time, based on the selected time period. The metrics that are displayed in the Grid widget are as follows:
• LTM
• Current Connections
• Connections per Second
• Inbound Throughput (bps)
• Outbound Throughput (bps)
• CPU Utilization (%)
• Memory Utilization (%)

Top 5 Virtual Servers by Current Connections (Line)
Displays the top five Virtual Servers by current connections for the selected LTM. Current or active connections are the connections that are available to that LTM at a particular time, based on the selected time period. The Time Zoom feature is available for this widget.

GTM Details dashboard
F5® BIG-IP® Global Traffic Manager™ (GTM) distributes DNS and user application requests based on business policies, data center and cloud service conditions, user location, and application performance. BIG-IP GTM delivers flexible global application management in virtual and cloud environments.

Available widgets and their interactivity The GTM Details and its widgets:

GTM Details
1. Click Home > Load Balancer > GTM Details. This dashboard displays the F5 BIG-IP GTM information in your load-balancing environment.
2. You can filter data based on the following filters:
• Site, the list contains the locations to view the traffic levels.
• Business Hour, the list contains Business Hour, Non-Business Hour, and both as ALL.
• Top N, the list contains 10, 20, 50, and ALL to display the top N performers.
• Time Period, the list contains the following options:
– Last Hour
– Last 6 Hours
– Last 12 Hours
– Last 24 Hours
– Last 7 Days
– Last 30 Days
– Last 365 Days
– Custom
Note:
• You can categorize your enterprise network based on different geographical areas and sites. You can configure the sites by specifying the IP address ranges and the time period for business hours and non-business hours. To configure these properties, see Configuring site grouping.
• Refer to the conditions that apply when you use the Site and Business Hour filter options.
3. Click Apply Filter.

Table 99. Widget interactions

Master widget: GTM Health Summary.
Listener widgets (drill-down widgets: N/A):
• GTM Health Details
• Health Details: CPU Utilization (%), Memory Utilization (%), Dropped per Second Trend, Requests per Second Trend, Resolutions per Second Trend, Current Connections Trend, Connections per Second Trend, Inbound Throughput (bps) Trend, Outbound Throughput (bps) Trend
• Wide IPs: Wide IP Health Details

Table 100. Available widgets

GTM Health Summary (Grid)
Displays the list of Global Traffic Managers that are available in your load-balancing service. This widget displays the metrics to monitor the health of the GTMs. Click any GTM from the table to see the details. The following metrics are monitored:
• IP Address
• Requests per Second
• Max Requests per Second
• Resolutions per Second
• Max Resolutions per Second
• Dropped per Second
• Max Dropped per Second
• Current Connections
• Max Current Connections
• Connections per Second
• Max Connections per Second
• Inbound Throughput (bps)
• Max Inbound Throughput (bps)
• Outbound Throughput (bps)
• Max Outbound Throughput (bps)

GTM Health Details (Badge)
Displays the selected GTM identifiers and the status of the GTMs that are available.

Status (Badge)
Displays the state of the GTM. The state of a GTM is derived from the status of its Wide IPs:
• If all Wide IPs are available (status 1), the color is Green: the GTM is available.
• If the Wide IPs are partially available, with a mix of statuses 1, 2, 3, and 4, the color is Orange: the GTM is partially available.
• If no Wide IPs are available (status 3 or 4), the color is Red: the GTM is unavailable.
Explanation for the color codes is as follows:
• Green: Available
• Orange: Partially Available
• Red: Unavailable

CPU Utilization (%) (Gauge)
The CPU utilization across all available cores in the selected GTM.

Memory Utilization (%) (Gauge)
The memory utilization across all available cores in the selected GTM.

Dropped per Second Trend (Line)
The number of requests that are dropped per second.
Note: The TimeZoom feature is available for all the Line charts in this dashboard.

Requests per Second Trend (Line)
The number of requests that are satisfied per second.

Resolutions per Second Trend (Line)
The number of transaction requests that are completed per second.

Current Connections Trend (Line)
Current or active connections that are available to that GTM at a particular time, based on the selected time period. It is a measure of the number of client/server requests that can be handled.

Connections per Second Trend (Line)
Active GTM connections that are available per second.

Inbound Throughput (bps) Trend (Line)
Measured in bits per second. Typically, it is the amount of data that is sent to and from the GTM to the client.

Outbound Throughput (bps) Trend (Line)
Measured in bits per second. Typically, it is the amount of data that is sent to and from the client to the GTM.

Wide IP Health Details (Grid)
The main configuration element in a GTM is called a Wide IP or WIP. A Wide IP equates to the common URL that you are load balancing, for example, www.yourcompany.com. A pool or pools are usually attached to a WIP, which contain the IP addresses that it intelligently resolves. BIG-IP® DNS selects pools based on the order in which they are listed in a Wide IP. The Wide IP contains one or more pools, which in turn contain one or more virtual servers. The following metrics are monitored:
• Status
• Wide IP Name
• Requests per Second
• Max Requests per Second
• Resolutions per Second
• Max Resolutions per Second
• Dropped per Second
• Max Dropped per Second
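The Status badge logic described above is a roll-up of member statuses into a single color. The following is a minimal Python sketch, assuming status 1 means available and statuses 3 and 4 mean unavailable, as stated for Wide IPs; treating any other mix as partially available is an assumption.

def rollup_color(member_statuses):
    # member_statuses: status codes of the Wide IPs (or, for an LTM,
    # of the Virtual Servers) that belong to the device.
    if not member_statuses:
        return "red"
    if all(s == 1 for s in member_statuses):
        return "green"   # all members available
    if all(s in (3, 4) for s in member_statuses):
        return "red"     # no members available
    return "orange"      # mixed statuses: partially available

print(rollup_color([1, 1, 1]))  # green
print(rollup_color([1, 3, 2]))  # orange
print(rollup_color([3, 4]))     # red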

LTM Details dashboard F5® BIG-IP® Local Traffic Manager™ (LTM) is an intelligent application traffic management tool. It monitors and controls the network traffic to make sure that the applications are always available, secure, and fast. BIG-IP LTM includes static and dynamic load balancing to eliminate single points of failure.

Available widgets and their interactivity The LTM Details and its widgets:

LTM Details
1. Click Home > Load Balancer > LTM Details. This dashboard displays the F5 BIG-IP LTM information in your load-balancing environment.
2. You can filter data based on the following filters:
• Site, the list contains the locations to view the traffic levels.
• Business Hour, the list contains Business Hour, Non-Business Hour, and both as ALL.
• Top N, the list contains 10, 20, 50, and ALL to display the top N performers.
• Time Period, the list contains the following options:
– Last Hour
– Last 6 Hours
– Last 12 Hours
– Last 24 Hours
– Last 7 Days
– Last 30 Days
– Last 365 Days
– Custom
Note:
• You can categorize your enterprise network based on different geographical areas and sites. You can configure the sites by specifying the IP address ranges and the time period for business hours and non-business hours. To configure these properties, see Configuring site grouping.
• Refer to the conditions that apply when you use the Site and Business Hour filter options.
3. Click Apply Filter.

Table 101. Widget interactions

Master widget: LTM Health Summary.
Listener widgets and their drill-down widgets:
• LTM Health Details. Drill-down widgets: N/A.
• Virtual Servers. Drill-down widget: Virtual Servers Details.
• Pools. Drill-down widget: Pool Details.
• Pool Members. Drill-down widget: Pool Member Details.
• CPU Utilization (%). Drill-down widgets: N/A.
• Memory Utilization (%). Drill-down widgets: N/A.
• Top 5 Virtual Servers by Current Connections. Drill-down widget: Virtual Servers Details.
• Current Connections Trend. Drill-down widgets: N/A.
• Connections per Second Trend. Drill-down widgets: N/A.
• Inbound Throughput (bps) Trend. Drill-down widgets: N/A.
• Outbound Throughput (bps) Trend. Drill-down widgets: N/A.

Table 102. Available widgets

LTM Health Summary (Grid)
Displays the list of Local Traffic Managers that are available in your load-balancing service. This widget displays the metrics to monitor the health of the LTMs. Click any LTM from the table to see the details. The following metrics are monitored:
• LTM
• IP Address
• Current Connections
• Max Current Connections
• Connections per Second
• Max Connections per Second
• Inbound Throughput (bps)
• Max Inbound Throughput (bps)
• Outbound Throughput (bps)
• Max Outbound Throughput (bps)
• CPU Utilization (%)
• Max CPU Utilization (%)
• Memory Utilization (%)
• Max Memory Utilization (%)

LTM Health Details: Status, Virtual Servers, Pools, Pool Members (Badge)
Status displays the state of the LTM. The state of an LTM is derived from the status of its Virtual Servers:
• If all Virtual Servers are available (status 1), the color is Green: the LTM is available.
• If the Virtual Servers are partially available, with a mixed status of 1, 2, 3, and 4, the color is Orange: the LTM is partially available.
• If no Virtual Servers are available (status 4), the color is Red: the LTM is unavailable.
Explanation for the color codes is as follows:
• Green: Available
• Orange: Partially Available
• Red: Unavailable
The widget also displays the total number of Virtual Servers, Pools, and Pool Members that are associated with the selected LTM, and their state. The value and color code indicate the number of devices that are unavailable, partially available, and available, with the same color codes.

CPU Utilization (%) (Gauge)
The CPU utilization across all available cores in the selected LTM.

Memory Utilization (%) (Gauge)
The memory utilization across all available cores in the selected LTM.

Top 5 Virtual Servers by Current Connections (Line)
The top five Virtual Servers that are available, by the current connections in the selected LTM. It drills down to the specific Virtual Servers Details dashboard.
Note: The TimeZoom feature is available for all the Line charts in this dashboard.

Current Connections Trend (Line)
Current or active connections that are available to that LTM at a particular time, based on the selected time period. It is a measure of the number of client/server requests that can be handled.

Connections per Second Trend (Line)
Active LTM connections that are available per second.

Inbound Throughput (bps) Trend (Line)
Measured in bits per second. Typically, it is the amount of data that is sent to and from the LTM to the client.

Outbound Throughput (bps) Trend (Line)
Measured in bits per second. Typically, it is the amount of data that is sent to and from the client to the LTM.


Virtual Server Details dashboard A virtual server is one of the most important components of any BIG-IP system. The F5 Virtual Server is a traffic management object on your F5 BIG-IP device. It is the representation of multiple servers to the user as a single server. The F5 Virtual Server is a virtual IP that serves user requests. It transmits the requests to the pool that you configure. It is represented by a virtual IP address and a service, such as :80. The primary purpose of a virtual server is to distribute traffic.

Available widgets and their interactivity The Virtual Server Details and its widgets:

Virtual Server Details
1. Click Home > Load Balancer > Virtual Server Details. This dashboard displays the F5 BIG-IP Virtual Servers information in your load-balancing environment.
2. You can filter data based on the following filters:
• Site, the list contains the locations to view the traffic levels.
• LTM, the list contains the available LTM devices in your load-balancing environment.
• Business Hour, the list contains Business Hour, Non-Business Hour, and both as ALL.
• Top N, the list contains 10, 20, 50, and ALL to display the top N performers.
• Time Period, the list contains the following options:
– Last Hour
– Last 6 Hours
– Last 12 Hours
– Last 24 Hours
– Last 7 Days
– Last 30 Days
– Last 365 Days
– Custom
Note:
• You can categorize your enterprise network based on different geographical areas and sites. You can configure the sites by specifying the IP address ranges and the time period for business hours and non-business hours. To configure these properties, see Configuring site grouping.
• Refer to the conditions that apply when you use the Site and Business Hour filter options.
3. Click Apply Filter.

Table 103. Widget interactions

Master widget: Virtual Server Health Summary.
Listener widgets (drill-down widgets: N/A):
• Virtual Server Health Details
• Current Connections Trend
• Connections per Second Trend
• Inbound Throughput (bps) Trend
• Outbound Throughput (bps) Trend
• Total Request Trend
• CPU Utilization (%) Trend

Available widgets

Virtual Server Health Summary (Grid)
Displays the list of Virtual Servers that are in the load balancer environment and their associated metrics in the columns:
• Status. The status of the virtual server is represented as follows:
– Green: Available
– Orange: Partially Available
– Red: Unavailable
• Virtual Server
• IP Address
• Connection Limit
• Current Connections
• Max Current Connections
• Connections per Second
• Max Connections per Second
• Total Request
• Max Total Request
• Inbound Throughput (bps)
• Max Inbound Throughput (bps)
• Outbound Throughput (bps)
• Max Outbound Throughput (bps)
• CPU Utilization (%)
• Max CPU Utilization (%)

Virtual Server Health Details (Badge)
The selected Virtual Server health details. The Virtual Server is selected from the Virtual Server Health Summary table.

Current Connections Trend (Line)
Current or active connections that are available to that Virtual Server at a particular time, based on the selected time period. It is a measure of the number of client/server requests that can be handled. It also displays the Connection Limit metric.
Note: The TimeZoom feature is available for all the Line charts in this dashboard.

Connections per Second Trend (Line)
Active Virtual Server connections that are available per second.

Inbound Throughput (bps) Trend (Line)
Measured in bits per second. Typically, it is the amount of data that is sent to and from the Virtual Server to the client.

Outbound Throughput (bps) Trend (Line)
Measured in bits per second. Typically, it is the amount of data that is sent to and from the client to the Virtual Server.

Total Request Trend (Line)
The total number of requests that are handled by the Virtual Server in a particular time period.

CPU Utilization (%) Trend (Line)
The CPU utilization across all available cores in the selected Virtual Server.

Related information Virtual Servers

Pool Details dashboard A load-balancing pool consists of a set of devices such as web servers that can be logically grouped. These pools can receive and process traffic. Instead of sending client traffic to the destination IP address specified in the client request, Local Traffic Manager sends the request to any of the servers that are members of that pool. It helps to efficiently distribute the load on your server resources.

Available widgets and their interactivity The Pool Details and its widgets:

Pool Details
1. Click Home > Load Balancer > Pool Details. This dashboard displays the F5 BIG-IP Pool information in your load-balancing environment.
2. You can filter data based on the following filters:
• Site, the list contains the locations to view the traffic levels.
• LTM, the list contains the available LTM devices in your load-balancing environment.
• Business Hour, the list contains Business Hour, Non-Business Hour, and both as ALL.
• Top N, the list contains 10, 20, 50, and ALL to display the top N performers.
• Time Period, the list contains the following options:
– Last Hour
– Last 6 Hours
– Last 12 Hours
– Last 24 Hours
– Last 7 Days
– Last 30 Days
– Last 365 Days
– Custom
Note:
• You can categorize your enterprise network based on different geographical areas and sites. You can configure the sites by specifying the IP address ranges and the time period for business hours and non-business hours. To configure these properties, see Configuring site grouping.
• Refer to the conditions that apply when you use the Site and Business Hour filter options.
3. Click Apply Filter.

Table 104. Widget interactions

Master widget: Pool Health Summary.
Listener widgets (drill-down widgets: N/A):
• Pool Health Details
• Current Connections Trend
• Connections per Second Trend
• Inbound Throughput (bps) Trend
• Outbound Throughput (bps) Trend

Table 105. Available widgets

Pool Health Summary (Grid)
Displays the Pools that are in the load balancer environment and their associated metrics in the columns:
• Status. The status of the Pool is represented as follows:
– Green: Available
– Orange: Partially Available
– Red: Unavailable
• Pool
• Load Balancing Algorithm
• Current Connections
• Max Current Connections
• Connections per Second
• Max Connections per Second
• Inbound Throughput (bps)
• Max Inbound Throughput (bps)
• Outbound Throughput (bps)
• Max Outbound Throughput (bps)

Pool Health Details (Badge)
The selected pool health details. The pool identifier is selected from the Pool Health Summary table.

Current Connections Trend (Line)
Current or active connections that are available to that pool at a particular time, based on the selected time period. It is a measure of the number of client/server requests that can be handled.
Note: The TimeZoom feature is available for all the Line charts in this dashboard.

Connections per Second Trend (Line)
Active pool connections that are available per second.

Inbound Throughput (bps) Trend (Line)
Measured in bits per second. Typically, it is the amount of data that is sent to and from the pool to the client.

Outbound Throughput (bps) Trend (Line)
Measured in bits per second. Typically, it is the amount of data that is sent to and from the client to the pool.

Related information Pools

Pool Member Details dashboard A pool consists of pool members. A pool member is a logical object that represents a physical node on the network. A pool is assigned to a virtual server. The BIG-IP system directs traffic that comes into the virtual server to a member of that pool. An individual pool member can belong to one or multiple pools, depending on your network traffic configuration.

Available widgets and their interactivity The Pool Member Details and its widgets:

Pool Member Details
1. Click Home > Load Balancer > Pool Member Details. This dashboard displays the F5 BIG-IP Pool Member details in your load-balancing environment. Typically, it shows the pool members that are configured on a specific device. You can track and monitor the number of pool members, and the virtual server and IP address of the device on which the pool members are configured.
2. You can filter data based on the following filters:
• Site, the list contains the locations to view the traffic levels.
• LTM, the list contains the available LTM devices in your load-balancing environment.
• Business Hour, the list contains Business Hour, Non-Business Hour, and both as ALL.
• Top N, the list contains 10, 20, 50, and ALL to display the top N performers.
• Time Period, the list contains the following options:
– Last Hour
– Last 6 Hours
– Last 12 Hours
– Last 24 Hours
– Last 7 Days
– Last 30 Days
– Last 365 Days
– Custom
Note:
• You can categorize your enterprise network based on different geographical areas and sites. You can configure the sites by specifying the IP address ranges and the time period for business hours and non-business hours. To configure these properties, see Configuring site grouping.
• Refer to the conditions that apply when you use the Site and Business Hour filter options.
3. Click Apply Filter.

Table 106. Widget interactions

Master widget: Pool Member Health Summary.
Listener widgets (drill-down widgets: N/A):
• Pool Member Health Details
• Current Connections Trend
• Connections per Second Trend
• Inbound Throughput (bps) Trend
• Outbound Throughput (bps) Trend

Table 107. Available widgets

Pool Member Health Summary (Grid)
Displays the list of all the Pool Members that are in the load balancer environment and their associated metrics in the columns:
• Status. The status of the Pool Member is represented as follows:
– Green: Available
– Orange: Partially Available
– Red: Unavailable
• Pool Member
• IP addresses of the Pool Members
• Connection Limit
• Current Connections
• Max Current Connections
• Connections per Second
• Max Connections per Second
• Inbound Throughput (bps)
• Max Inbound Throughput (bps)
• Outbound Throughput (bps)
• Max Outbound Throughput (bps)

Pool Member Health Details (Badge)
The selected pool member health details. The IP address of the pool member is selected from the Pool Member Health Summary table.

Current Connections Trend (Line)
Current or active connections that are available to that pool member at a particular time, based on the selected time period. It is a measure of the number of client/server requests that can be handled. It also displays the Connection Limit metric.
Note: The TimeZoom feature is available for all the Line charts in this dashboard.

Connections per Second Trend (Line)
Active connections that are available per second.

Inbound Throughput (bps) Trend (Line)
Measured in bits per second. Typically, it is the amount of data that is sent to and from the pool member to the client.

Outbound Throughput (bps) Trend (Line)
Measured in bits per second. Typically, it is the amount of data that is sent to and from the client to the pool member.

NetFlow dashboards
Network flow monitoring is often used to resolve network performance issues and ensure Quality of Service (QoS) for key applications and services. Network flow monitoring gives the visibility that is needed for effective network and infrastructure management. Network Performance Insight Dashboards track the flow of applications and key services over all areas of the network, such as devices, servers, and more, and offer insights into network bandwidth utilization. Network Performance Insight Dashboards populate the performance metrics based on the IP network traffic information that is collected as packets enter or exit an interface of a device. When you select an interface for a device, this resource provides a view of the applications that are responsible for traffic volume, either inbound or outbound, through the interface over the selected time period. This data provides granular details about the network traffic that passes through an interface. Ensure that you have configured Network Performance Insight correctly before you use the NetFlow dashboards.

Drill-down dashboard views The diagram shows the flow dashboard groups and the available views.

Network Traffic Overview dashboard
The Network Traffic Overview dashboard provides comprehensive bandwidth analysis and performance metrics monitoring capabilities by monitoring the worst performing interfaces. It helps you to further understand the interface utilization trend, and also the IP traffic composition that is related to that interface, based on the collected flow data. Use the Network Traffic Overview dashboard to monitor the network performance details of a particular interface, the traffic trends, and the usage patterns of your network. It identifies the network Top Talkers, the applications that use the most network bandwidth.
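As an illustration of how a Top Talkers ranking can be derived from flow records, the following Python sketch groups hypothetical flow records by application and ranks them by total octets. The record layout and sample data are assumptions, not Network Performance Insight's schema.

from collections import Counter

flows = [
    # (application, octets) extracted from exported flow records
    ("https", 9_400_000),
    ("dns", 120_000),
    ("https", 7_100_000),
    ("smtp", 860_000),
    ("dns", 95_000),
]

volume_by_app = Counter()
for app, octets in flows:
    volume_by_app[app] += octets

for app, octets in volume_by_app.most_common(10):  # top 10 talkers
    print(f"{app}: {octets} octets")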

Available widgets and their interactivity
The master-listener and drill-down widget interactions can be seen in the following ways:

• For any widget with the drill-down icon, click any bar in the chart that represents the interface or metric to open the drill-down dashboard page, which contains the associated widgets.
• For the other widgets that are shown in the diagram in a master-listener or drill-down interaction, click any of the bars that represent the interface to change the listener widget, which displays the related data for the selected interface.
The diagram shows the master-listener and drill-down interactions between the available widgets:

Network Traffic Overview
1. Click NetFlow > Network Traffic Overview. The Network Traffic Overview: Top 10 Traffic dashboard loads. This dashboard provides a view of the top 10 traffic details, ranked in order by traffic volume.
2. You can filter data based on the following filters:
• Site, the list contains the locations to view the traffic levels.
• Business Hour, the list contains Business Hour, Non-Business Hour, and both as ALL.
• Time Period, the list contains Last Hour and Last 24 Hours.
Note:
• You can categorize your enterprise network based on different geographical areas and sites. You can configure the sites by specifying the IP address ranges and the time period for business hours and non-business hours. To configure these properties, see Configuring site grouping.
• Refer to the conditions that apply when you use the Site and Business Hour filter options.

Table 108. Widget interactions

Master widgets: Top 10 Inbound Utilization (%) or Top 10 Outbound Utilization (%). Click any bar that represents the interface inbound or outbound utilization to refresh the listener widgets to display the related data for the selected interface.

Listener widgets from the Interface Utilization and Deviation Performance over Time pane (drill-down dashboard: N/A):
• Utilization Deviation
• Utilization
• Traffic Volume (Octets)
• Interface Speed (bps)
• 4-Weeks Utilization Trend
• Utilization Trend

Listener widgets from the IP Traffic Composition for Selected Interface pane:
• Top 10 Applications by Traffic Volume (Octets). Click the Find out more details about top users with application. Click here link to open the Top Sources with Application dashboard.
• Top 10 Protocols by Traffic Volume (Octets). Click the Find out more details about top users with protocol. Click here link to open the Top Protocols with Source dashboard.
• Top 10 ToS by Traffic Volume (Octets)
• Top 10 Conversations by Traffic Volume (Octets)
• Top 10 Sources by Traffic Volume (Octets)
• Top 10 Destinations by Traffic Volume (Octets)
The drill-down dashboards for this pane are Top Applications, Top Sources with Application, Top Protocols, Top Protocols with Source, Top ToS, Top Conversation, Top Sources, and Top Destinations. Click the listener widget's title bar to drill down to the NetFlow dashboards for more detailed information on the flow data. For more information, see “NetFlow dashboards” on page 602.

Table 109. Available Widgets

Widget name | Chart type | Typical uses
Top 10 Inbound Utilization (%), Top 10 Outbound Utilization (%) | Bar | Displays the top 10 worst performing inbound or outbound interfaces, based on the interface utilization metric that is calculated from the collected flow data. The data is displayed for the selected time period.
Utilization Deviation | Badge | Displays the interface utilization deviation percentage. The deviation metric is calculated by comparing the current interface utilization with the values from the previous four weeks over the same time period. A calculation sketch follows this table.
Utilization | Badge | Displays the interface utilization percentage.
Traffic Volume (Octets) | Badge | Displays the traffic volume, in octets, for the selected interface.
Interface Speed (bps) | Badge | Displays the interface speed in bits per second (bps).
4-Weeks Utilization Trend | Bar | Displays the interface utilization trend of the selected interface. Note: The deviation is calculated from the 30-minute aggregation table.
Utilization Trend | Timeseries | Displays the time-series utilization performance for the selected interface and time period.
Top 10 Applications by Traffic Volume (Octets) | Donut | Displays the top 10 applications that consumed the most network bandwidth for the selected interface.
Top 10 Protocols by Traffic Volume (Octets) | Donut | Displays the top 10 protocols that consumed the most network bandwidth for the selected interface.
Top 10 ToS by Traffic Volume (Octets) | Donut | Displays the top 10 Type of Service (ToS) values that consumed the most network bandwidth for the selected interface.
Top 10 Conversations by Traffic Volume (Octets) | Donut | Displays the top 10 conversations (source-destination IP address pairs) that consumed the most network bandwidth for the selected interface.
Top 10 Sources by Traffic Volume (Octets) | Donut | Displays the top 10 source IP addresses that consumed the most network bandwidth for the selected interface.
Top 10 Destinations by Traffic Volume (Octets) | Donut | Displays the top 10 destination IP addresses that consumed the most network bandwidth for the selected interface.

Network Traffic Overview dashboard for Interim Fix 2

The enhanced version of the Network Traffic Overview dashboard uses visually advanced Sankey diagrams, which display both traffic flow and traffic volume. Sankey diagrams are a specific type of flow diagram in which the width of each arrow is proportional to the flow quantity. The six widgets of the IP Traffic Composition for Selected Interface pane are replaced with two new widgets that are rendered as Sankey diagrams:
• Top 10 Sources with Applications and Destinations
• Top 10 Destinations with Applications and Sources
These two widgets show the traffic flow and traffic volume between source and destination servers and applications.
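As an aside, the source-to-application-to-destination idea behind these widgets can be reproduced with any Sankey-capable charting library. The following sketch uses plotly, which is not part of Netcool Operations Insight; all labels and values are invented.

```python
# Illustrative Sankey of source -> application -> destination traffic.
import plotly.graph_objects as go

labels = ["10.1.1.5", "10.1.1.9",     # sources
          "HTTPS", "DNS",              # applications
          "10.9.0.2", "10.9.0.7"]      # destinations
links = dict(
    source=[0, 1, 1, 2, 2, 3],         # indexes into labels
    target=[2, 2, 3, 4, 5, 5],
    value=[40, 25, 5, 50, 15, 5],      # octets; arrow width ~ volume
)
fig = go.Figure(go.Sankey(node=dict(label=labels), link=links))
fig.show()
```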

Available Widgets and their interactions

Table 110. Widget interactions

Master widgets: Top 10 Inbound Utilization (%) and Top 10 Outbound Utilization (%). Click any bar that represents the interface inbound or outbound utilization to refresh the listener widgets with the related data for the selected interface.
Listener widgets: The listener widgets in the Interface Utilization and Deviation Performance over Time pane are:
• Utilization Deviation
• Utilization
• Traffic Volume (Octets)
• Interface Speed (bps)
• 4-Weeks Utilization Trend
• Utilization Trend
Drill-down dashboard: N/A

Table 111. Available Widgets

Widget name | Chart type | Typical uses
Top 10 Inbound Utilization (%), Top 10 Outbound Utilization (%) | Bar | Displays the top 10 worst performing inbound or outbound interfaces, based on the interface utilization metric that is calculated from the collected flow data. The data is displayed for the selected time period.
Utilization Deviation | Badge | Displays the interface utilization deviation percentage. The deviation metric is calculated by comparing the current interface utilization with the values from the previous four weeks over the same time period.
Utilization | Badge | Displays the interface utilization percentage.
Traffic Volume (Octets) | Badge | Displays the traffic volume, in octets, for the selected interface.
Interface Speed (bps) | Badge | Displays the interface speed in bits per second (bps).
4-Weeks Utilization Trend | Bar | Displays the interface utilization trend of the selected interface. Note: The deviation is calculated from the 30-minute aggregation table.
Utilization Trend | Timeseries | Displays the time-series utilization performance for the selected interface and time period.
Top 10 Sources with Applications and Destinations | Sankey | Displays the top 10 source IP addresses that consumed the most network bandwidth when interacting with applications. The bandwidth that applications consume when interacting with destinations is also displayed. The thicker the arrow in the Sankey diagram, the greater the bandwidth.
Top 10 Destinations with Applications and Sources | Sankey | Displays the top 10 destination IP addresses that consumed the most network bandwidth when interacting with applications. The bandwidth that applications consume when interacting with sources is also displayed. The thicker the arrow in the Sankey diagram, the greater the bandwidth.

Applications Response Overview dashboard

The Applications Response Overview dashboard displays an overview of the worst performing application-target server pairings, based on the monitored Applications Response Time metrics.
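Conceptually, the dashboard ranks application-target server pairings by their delay metrics. The following Python sketch shows that ranking logic on invented data; the record layout is hypothetical.

```python
# Sketch: rank application-target server pairings by total delay, the way
# this dashboard orders its Top 10. All records are invented.
import heapq

pairings = [  # (application, target server, total delay in ms)
    ("HTTP",  "10.9.0.2", 420.0),
    ("LDAP",  "10.9.0.7",  95.0),
    ("HTTPS", "10.9.0.2", 310.0),
    ("DNS",   "10.9.0.1",  12.0),
]
worst = heapq.nlargest(10, pairings, key=lambda p: p[2])
for app, target, delay in worst:
    print(f"{app:<6} -> {target:<10} {delay:7.1f} ms")
```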

Available widgets and their interactivity

The master-listener and drill-down widget interactions work in the following ways:

• For any widget with the drill-down icon, click any bar in the chart that represents an interface or metric to open the drill-down dashboard page, which contains the associated widgets.
• For the other widgets that are in a master-listener or drill-down interaction, click any bar that represents an interface to refresh the listener widget with the related data for the selected interface.

[Diagram: master-listener and drill-down interactions between the available widgets.]

Applications Response Overview

1. Click NetFlow > Applications Response Overview. The Applications Response Overview: Top 10 dashboard loads. This dashboard provides a view of the top 10 worst performing application-target server pairings, based on the monitored Applications Response Time metrics.
2. From the filter options, choose the Device, Interface, and Time period, and click Apply Filter.
Note: You can select the filter values or time ranges that you want to display in the dashboard to which the filter is assigned. The dashboard refreshes the data according to the filter attribute values.

Table 112. Widget interactions

Master widgets:
• Top 10 Applications by Client Network Delay (ms)
• Top 10 Applications by Server Network Delay (ms)
• Top 10 Applications by Application Delay (ms)
• Top 10 Applications by Total Delay (ms)
In each widget, click any bar that represents an application to drill down to the Applications Response Time page.
Listener widgets: N/A
Drill-down dashboard: Applications Response Time. You can drill down to the Applications Response Time page from these widgets to view application response time issues in a network.

Table 113. Available Widgets

Widget name | Chart type | Typical uses
Top 10 Applications by Client Network Delay (ms) | Bar | Displays the top 10 applications by client network delay, based on the monitored Applications Response Time metrics. The client delay is shown in milliseconds for the selected filter attribute values.
Top 10 Applications by Server Network Delay (ms) | Bar | Displays the top 10 applications by server network delay, based on the monitored Applications Response Time metrics. The server delay is shown in milliseconds for the selected filter attribute values.
Top 10 Applications by Application Delay (ms) | Bar | Displays the top 10 applications by application delay, based on the monitored Applications Response Time metrics. The application response delay is shown in milliseconds for the selected filter attribute values.
Top 10 Applications by Total Delay (ms) | Bar | Displays the top 10 applications by total delay, based on the monitored Applications Response Time metrics. The total application delay is shown in milliseconds for the selected filter attribute values.

Applications dashboards

This topic gives you an overview of how to use the Applications dashboards. The Applications dashboards are divided into two parts. To access them, click NetFlow > Applications and select one of the dashboards:
• Top Applications
• Top Applications with ToS

Top Applications

This dashboard provides a view of the top 10 applications responsible for monitored traffic on your network, ranked in order of traffic volume for an interface.

Table 114. Available Widgets

Widget Name | Chart Type | Description
Top 10 Applications | Donut | Displays the top 10 applications by name with their total traffic volume in octets. The percentage for each application is based on the selection shown in the widget; the individual applications in the legend add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the application uses the interface bandwidth in octets, bps, or percent. A unit-conversion sketch follows this table.
Top 10 Applications | Grid | Displays the Top 10 traffic data (in octets and packets) of device applications through the selected interface. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.
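The three views of the Traffic Volume Trend widget (octets, bps, and percent) are unit conversions of the same interval sample. A minimal sketch, assuming a 5-minute sample interval and a known interface speed; the interval and values are illustrative:

```python
# Sketch of the unit conversions behind the Traffic Volume Trend views.
def octets_to_bps(octets: int, interval_seconds: int) -> float:
    return octets * 8 / interval_seconds          # throughput in bps

def bps_to_utilization(bps: float, if_speed_bps: int) -> float:
    return 100.0 * bps / if_speed_bps             # utilization in percent

octets = 90_000_000                               # one 5-minute sample
bps = octets_to_bps(octets, 300)
print(bps, bps_to_utilization(bps, 100_000_000))  # 2400000.0 2.4 (% of 100 Mbps)
```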

Top Applications with ToS

This dashboard provides a top 10 view of network traffic segmented by Type of Service (ToS) for an interface.
Note: Dashboard filters allow you to choose different views of the data to display on the active dashboard tab. You can select the filter values or time ranges that you want to display in the dashboards to which the filter is assigned.

Table 115. Available Widgets

Widget Name | Chart Type | Description
Top 10 Applications with ToS | Donut | Displays the top 10 applications by Type of Service (ToS) for an interface. The percentage for each application with ToS is based on the selection shown in the widget; the individual applications in the legend add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the application uses the interface bandwidth in octets, bps, or percent.
Top 10 Applications with ToS | Grid | Displays the Top 10 traffic data (in octets and packets) of device applications with ToS through the selected interface. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Conversations dashboards

This topic gives you an overview of how to use the Conversations dashboards. To access them, click NetFlow > Conversations and select any one of the dashboards:
• Top Conversations
• Top Conversations with ToS
• Top Conversations with Application

Top Conversations

This dashboard provides a view of the top 10 most bandwidth-consuming conversations conducted over your monitored network. Conversations are listed with the amount of data transferred in the conversation, in both octets and packets.
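A conversation is an aggregation of flow records that share a source-destination IP pair. The following Python sketch illustrates that aggregation on invented flow records; the record layout is hypothetical, and the key is kept directional for simplicity.

```python
# Sketch: reduce raw flow records to conversations with octet and
# packet totals, as this dashboard presents them.
from collections import defaultdict

flows = [  # (src_ip, dst_ip, octets, packets) - invented records
    ("10.1.1.5", "10.9.0.2", 4000, 12),
    ("10.1.1.5", "10.9.0.2", 1500,  3),
    ("10.1.2.8", "10.9.0.7",  900,  2),
]
conversations = defaultdict(lambda: [0, 0])
for src, dst, octets, packets in flows:
    conversations[(src, dst)][0] += octets
    conversations[(src, dst)][1] += packets

top = sorted(conversations.items(), key=lambda kv: kv[1][0], reverse=True)[:10]
for (src, dst), (octets, packets) in top:
    print(f"{src} -> {dst}: {octets} octets, {packets} packets")
```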

Table 116. Available Widgets

Widget Name | Chart Type | Typical uses
Top 10 Conversations | Donut | Displays the top 10 conversations by name with their total traffic volume in octets. The percentage for each conversation is based on the selection shown in the widget; the individual conversations in the legend add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the conversation traffic uses the interface bandwidth in octets, bps, or percent.
Top 10 Conversations | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected conversations through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Conversations with ToS

This dashboard provides a view of the top 10 most bandwidth-consuming conversations with ToS conducted over your monitored network. Conversations with ToS are listed with the amount of data transferred in the conversation, in both octets and packets.

Table 117. Available Widgets

Widget Name | Chart Type | Description
Top 10 Conversations with ToS | Donut | Displays the top 10 conversations with ToS by name with their total traffic volume in octets. The percentage for each conversation with ToS is based on the selection in the widget; the individual conversations in the legend add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the conversation traffic uses the interface bandwidth in octets, bps, or percent.
Top 10 Conversations with ToS | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected conversations with ToS through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Conversations with Application

This dashboard provides a view of the top 10 most bandwidth-consuming conversations with applications conducted over your monitored network. Conversations with application are listed with the amount of data transferred in the conversation, in both octets and packets.
Note: You can select the filter values or time ranges that you want to display in the dashboards to which the filter is assigned.

Table 118. Available Widgets

Widget Name | Chart Type | Description
Top 10 Conversations with Application | Donut | Displays the top 10 conversations with application by name with their total traffic volume in octets. The percentage for each conversation with application is based on the selection in the widget; the individual conversations with application in the legend add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the conversation traffic uses the interface bandwidth in octets, bps, or percent.
Top 10 Conversations with Application | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected conversations with application through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Sources dashboards

This topic gives you an overview of how to use the Sources dashboards. To access them, click NetFlow > Sources and select any one of the dashboards:
• Top Sources
• Top Sources with Application

Top Sources

This dashboard provides a view of the top 10 sources responsible for the most traffic on your monitored network.
Note: You can select the filter values or time ranges that you want to display in the dashboards to which the filter is assigned.

Table 119. Available Widgets

Widget Name | Chart Type | Description
Top 10 Sources | Donut | Displays the top 10 sources by name with their total traffic volume in octets. The percentage for each source is based on the selection shown in the widget; the individual sources add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the source traffic uses the interface bandwidth in octets, bps, or percent.
Top 10 Sources | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected sources through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Sources with Application

This dashboard provides a view of the top 10 sources of traffic with application responsible for the most traffic on your monitored network.

Table 120. Available Widgets

Widget Name | Chart Type | Description
Top 10 Sources with Application | Donut | Displays the top 10 sources with application by name with their total traffic volume in octets. The percentage for each source with application is based on the selection in the widget; the individual sources with application add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the sources with application traffic use the interface bandwidth in octets, bps, or percent.
Top 10 Sources with Application | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected sources with application through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Protocols dashboards

This topic gives you an overview of how to use the Protocols dashboards. To access them, click NetFlow > Protocols and select any one of the dashboards:
• Top Protocols
• Top Protocols with Application
• Top Protocols with Source
• Top Protocols with Destination
• Top Protocols with Conversation

Top Protocols

This dashboard provides a view of the top 10 protocols used most for traffic on your monitored network.

Table 121. Available Widgets

Widget Name | Chart Type | Description
Top 10 Protocols | Donut | Displays the top 10 protocols by name with their total traffic volume in octets. The percentage for each protocol is based on the selection shown in the widget; the individual protocols add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the protocol traffic uses the interface bandwidth in octets, bps, or percent.
Top 10 Protocols | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected protocols through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Protocols with Application

This dashboard provides a view of the top 10 protocols with application used most for traffic on your monitored network.

Table 122. Available Widgets

Widget Name | Chart Type | Description
Top 10 Protocols with Application | Donut | Displays the top 10 protocols with application by name with their total traffic volume in octets. The percentage for each protocol with application is based on the selection in the widget; the individual protocols with application add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the protocols with application traffic use the interface bandwidth in octets, bps, or percent.
Top 10 Protocols with Application | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected protocols with application through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Protocols with Source

This dashboard provides a view of the top 10 protocols with source used most for traffic on your monitored network.

Table 123. Available Widgets

Widget Name | Chart Type | Description
Top 10 Protocols with Source | Donut | Displays the top 10 protocols with source by name with their total traffic volume in octets. The percentage for each protocol with source is based on the selection in the widget; the individual protocols with source add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the protocols with source traffic use the interface bandwidth in octets, bps, or percent.
Top 10 Protocols with Source | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected protocols with source through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Protocols with Destination

This dashboard provides a view of the top 10 protocols with destination used most for traffic on your monitored network.

Table 124. Available Widgets

Widget Name | Chart Type | Description
Top 10 Protocols with Destination | Donut | Displays the top 10 protocols with destination by name with their total traffic volume in octets. The percentage for each protocol with destination is based on the selection in the widget; the individual protocols with destination add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the protocols with destination traffic use the interface bandwidth in octets, bps, or percent.
Top 10 Protocols with Destination | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected protocols with destination through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Protocols with Conversation

This dashboard provides a view of the top 10 protocols with conversation used most for traffic on your monitored network.
Note: You can select the filter values or time ranges that you want to display in the dashboards to which the filter is assigned.

Table 125. Available Widgets

Widget Name | Chart Type | Description
Top 10 Protocols with Conversation | Donut | Displays the top 10 protocols with conversation by name with their total traffic volume in octets. The percentage for each protocol with conversation is based on the selection in the widget; the individual protocols with conversation add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the protocols with conversation traffic use the interface bandwidth in octets, bps, or percent.
Top 10 Protocols with Conversation | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected protocols with conversation through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Destinations dashboards

This topic gives you an overview of how to use the Destinations dashboards. To access them, click NetFlow > Destinations and select any one of the dashboards:
• Top Destinations
• Top Destinations with Application

Top Destinations

This dashboard provides a view of the top 10 domains that serve as destinations of traffic on the network, ranked by percentage of the total traffic over the specified time period.

Table 126. Available Widgets

Widget Name | Chart Type | Description
Top 10 Destinations | Donut | Displays the top 10 destinations by name with their total traffic volume in octets. The percentage for each destination is based on the selection shown in the widget; the individual destinations add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the destination traffic uses the interface bandwidth in octets, bps, or percent.
Top 10 Destinations | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected destinations through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Destinations with Application

This dashboard provides a view of the top 10 destinations with application responsible for monitored traffic on your network, ranked in order of traffic volume.
Note: You can select the filter values or time ranges that you want to display in the dashboards to which the filter is assigned.

Table 127. Available Widgets

Widget Name | Chart Type | Description
Top 10 Destinations with Application | Donut | Displays the top 10 destinations with application by name with their total traffic volume in octets. The percentage for each destination with application is based on the selection in the widget; the individual destinations with application add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the destination traffic uses the interface bandwidth in octets, bps, or percent.
Top 10 Destinations with Application | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected destinations with application through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

ToS dashboards

This topic gives you an overview of how to use the ToS dashboard.

Top ToS

1. Click NetFlow > ToS > Top ToS. The Top ToS: Traffic Volume Details for Interface dashboard loads. This dashboard provides a view of the top 10 most bandwidth-consuming Type of Service (ToS) values for an interface.
Note: You can select the filter values or time ranges that you want to display in the dashboards to which the filter is assigned.
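For background, the ToS values that this dashboard groups by originate in the IP header's ToS byte, which standardly (RFC 2474) carries a 6-bit DSCP field and a 2-bit ECN field. A minimal decoding sketch:

```python
# Decode the IP ToS byte into its DSCP and ECN components (RFC 2474/3168).
def decode_tos(tos_byte: int) -> tuple[int, int]:
    dscp = tos_byte >> 2    # upper 6 bits: Differentiated Services Code Point
    ecn = tos_byte & 0x03   # lower 2 bits: Explicit Congestion Notification
    return dscp, ecn

print(decode_tos(0xB8))  # (46, 0) -> DSCP 46 is EF (Expedited Forwarding)
```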

Table 128. Available Widgets

Widget Name | Chart Type | Description
Top 10 ToS | Donut | Displays the top 10 ToS values by name with their total traffic volume in octets. The percentage for each ToS value is based on the selection shown in the widget; the individual ToS values add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the ToS traffic uses the interface bandwidth in octets, bps, or percent.
Top 10 ToS | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected ToS through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Autonomous Systems dashboards

This topic gives you an overview of how to use the Autonomous Systems dashboards. To access them, click NetFlow > Autonomous Systems and select any one of the dashboards:
• Top Autonomous System Conversations
• Top Source Autonomous Systems
• Top Destination Autonomous Systems

Top Autonomous System Conversations

This dashboard provides a view of the top 10 Autonomous System conversations with the highest bandwidth consumption. Autonomous System conversations are listed with the amount of data transferred, in both octets and packets, and the percentage of traffic utilization generated by the autonomous system over the specified time period.
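The percentage of traffic utilization reported per conversation can be understood as each AS pair's share of the period total. A minimal Python sketch on invented AS pairs and volumes:

```python
# Sketch: per-AS-pair share of the total traffic over a period.
as_conversations = {  # (src AS, dst AS) -> octets over the period (invented)
    (64500, 64511): 7_200_000,
    (64500, 64496): 1_800_000,
    (64501, 64511):   900_000,
}
total = sum(as_conversations.values())
for (src_as, dst_as), octets in sorted(as_conversations.items(),
                                       key=lambda kv: kv[1], reverse=True):
    print(f"AS{src_as} <-> AS{dst_as}: {octets} octets "
          f"({100.0 * octets / total:.1f}% of traffic)")
```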

Table 129. Available Widgets

Widget Name | Chart Type | Description
Top 10 Autonomous System Conversations | Donut | Displays the top 10 Autonomous System conversations by name with their total traffic volume in octets. The percentage for each conversation is based on the selection shown in the widget; the individual conversations in the legend add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the Autonomous System conversation traffic uses the interface bandwidth in octets, bps, or percent.
Top 10 Autonomous System Conversations | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected Autonomous System conversations through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Source Autonomous Systems

This dashboard provides a view of the top 10 source Autonomous Systems with the highest bandwidth consumption. Source Autonomous Systems are listed with the amount of data transferred, in both octets and packets, and the percentage of traffic utilization generated by the autonomous system over the specified time period.

Table 130. Available Widgets

Widget Name | Chart Type | Description
Top 10 Source Autonomous Systems | Donut | Displays the top 10 source Autonomous Systems by name with their total traffic volume in octets. The percentage for each source is based on the selection shown in the widget; the individual sources in the legend add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the source Autonomous System traffic uses the interface bandwidth in octets, bps, or percent.
Top 10 Source Autonomous Systems | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected source Autonomous Systems through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Destination Autonomous Systems

This dashboard provides a view of the top 10 destination Autonomous Systems with the highest bandwidth consumption. Destination Autonomous Systems are listed with the amount of data transferred, in both octets and packets, and the percentage of traffic utilization generated by the autonomous system over the specified time period.
Note: You can select the filter values or time ranges that you want to display in the dashboards to which the filter is assigned.

Table 131. Available Widgets

Widget Name | Chart Type | Description
Top 10 Destination Autonomous Systems | Donut | Displays the top 10 destination Autonomous Systems by name with their total traffic volume in octets. The percentage for each destination is based on the selection shown in the widget; the individual destinations in the legend add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the destination Autonomous System traffic uses the interface bandwidth in octets, bps, or percent.
Top 10 Destination Autonomous Systems | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected destination Autonomous Systems through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

QoS Queue dashboards

This topic gives you an overview of how to use the QoS Queue dashboard.

QoS Queue Drops

1. Click NetFlow > QoS Queue > QoS Queue Drops. The QoS Queue Drops: Outbound Traffic Details for Interface dashboard loads. This dashboard provides a view of the top 10 QoS queue drops for traffic on your monitored network for an outbound interface of a device. It provides visibility into how the defined traffic classes are performing in terms of packet drops.
Note: You can select the filter values or time ranges that you want to display in the dashboards to which the filter is assigned.

Table 132. Available Widgets

Widget Name | Chart Type | Description
QoS Queue Drops | Donut | Displays the top 10 QoS queue drop IDs with their total drops in packets. The percentage for each QoS queue drop is based on the selection shown in the widget; the individual queue drops add up to 100%. The percentage can be absolute or relative.
Packet Drop Trend | Timeseries | Displays the Packet Drop trend in packets across the selected time period.
QoS Queue Drops | Grid | Displays the Top 10 QoS queue drop data, in packets, that flows in the selected queue for an outbound interface of a device.

IP Address Grouping dashboards

This topic gives you an overview of how to use the IP Address Grouping dashboards. To access them, click NetFlow > IP Address Grouping and select any one of the dashboards:
• Top Source IP Groups with Application
• Top Source IP Groups with ToS
• Top Source IP Groups with Protocol
• Top Destination IP Groups with Application
• Top Destination IP Groups with ToS
• Top Destination IP Groups with Protocol
• Top IP Group Conversations with Application
• Top IP Group Conversations with ToS
• Top IP Group Conversations with Protocol
• Top Source IP Groups
• Top Destination IP Groups
• Top IP Group Conversations

Top Source IP Groups with Application

This dashboard provides a view of the top 10 source hosts, grouped by IP address group and application, that are responsible for the most traffic on your network.

Table 133. Available Widgets

Widget Name | Chart Type | Description
Top 10 Source IP Groups with Application | Donut | Displays the top 10 source IP Groups with application by name with their total traffic volume in octets. The percentage for each source IP Group with application is based on the selection in the widget; the individual source IP Groups with application add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the source IP Groups with application traffic use the interface bandwidth in octets, bps, or percent.
Top 10 Source IP Groups with Application | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected source IP Groups with application through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Source IP Groups with ToS

This dashboard provides a view of the top 10 source hosts, grouped by IP address group and ToS, that are responsible for the most traffic on your network.

Table 134. Available Widgets

Widget Name | Chart Type | Description
Top 10 Source IP Groups with ToS | Donut | Displays the top 10 source IP Groups with ToS by name with their total traffic volume in octets. The percentage for each source IP Group with ToS is based on the selection in the widget; the individual source IP Groups with ToS add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the source IP Groups with ToS traffic use the interface bandwidth in octets, bps, or percent.
Top 10 Source IP Groups with ToS | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected source IP Groups with ToS through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Source IP Groups with Protocol

This dashboard provides a view of the top 10 source hosts, grouped by IP address group and protocol, that are responsible for the most traffic on your network.

Table 135. Available Widgets

Widget Name | Chart Type | Description
Top 10 Source IP Groups with Protocol | Donut | Displays the top 10 source IP Groups with protocol by name with their total traffic volume in octets. The percentage for each source IP Group with protocol is based on the selection in the widget; the individual source IP Groups with protocol add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the source IP Groups with protocol traffic use the interface bandwidth in octets, bps, or percent.
Top 10 Source IP Groups with Protocol | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected source IP Groups with protocol through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Destination IP Groups with Application

This dashboard provides a view of the top 10 domains that serve as destinations of traffic on the network, grouped by IP address group and application, that are responsible for the most traffic on your network.

Table 136. Available Widgets

Widget Name | Chart Type | Description
Top 10 Destination IP Groups with Application | Donut | Displays the top 10 destination IP Groups with application by name with their total traffic volume in octets. The percentage for each destination IP Group with application is based on the selection in the widget; the individual destination IP Groups with application add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the destination IP Groups with application traffic use the interface bandwidth in octets, bps, or percent.
Top 10 Destination IP Groups with Application | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected destination IP Groups with application through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Destination IP Groups with ToS

This dashboard provides a view of the top 10 domains that serve as destinations of traffic on the network, grouped by IP address group and ToS, that are responsible for the most traffic on your network.

Table 137. Available Widgets

Widget Name | Chart Type | Description
Top 10 Destination IP Groups with ToS | Donut | Displays the top 10 destination IP Groups with ToS by name with their total traffic volume in octets. The percentage for each destination IP Group with ToS is based on the selection in the widget; the individual destination IP Groups with ToS add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the destination IP Groups with ToS traffic use the interface bandwidth in octets, bps, or percent.
Top 10 Destination IP Groups with ToS | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected destination IP Groups with ToS through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Destination IP Groups with Protocol

This dashboard provides a view of the top 10 domains that serve as destinations of traffic on the network, grouped by IP address group and protocol, that are responsible for the most traffic on your network.

Table 138. Available Widgets

Widget Name | Chart Type | Description
Top 10 Destination IP Groups with Protocol | Donut | Displays the top 10 destination IP Groups with protocol by name with their total traffic volume in octets. The percentage for each destination IP Group with protocol is based on the selection in the widget; the individual destination IP Groups with protocol add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the destination IP Groups with protocol traffic use the interface bandwidth in octets, bps, or percent.
Top 10 Destination IP Groups with Protocol | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected destination IP Groups with protocol through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top IP Group Conversations with Application

This dashboard provides a view of the top 10 bandwidth-consuming conversations, grouped by IP address group and application, that are responsible for the most traffic on your network.

Table 139. Available Widgets

Widget Name | Chart Type | Description
Top 10 IP Group Conversations with Application | Donut | Displays the top 10 IP Group conversations with application by name with their total traffic volume in octets. The percentage for each IP Group conversation with application is based on the selection in the widget; the individual IP Group conversations with application add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the IP Group conversations with application traffic use the interface bandwidth in octets, bps, or percent.
Top 10 IP Group Conversations with Application | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected IP Group conversations with application through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top IP Group Conversations with ToS

This dashboard provides a view of the top 10 bandwidth-consuming conversations, grouped by IP address group and ToS, that are responsible for the most traffic on your network.

Table 140. Available Widgets

Widget Name | Chart Type | Description
Top 10 Conversation IP Groups with ToS | Donut | Displays the top 10 conversation IP Groups with ToS by name with their total traffic volume in octets. The percentage for each conversation IP Group with ToS is based on the selection in the widget; the individual conversation IP Groups with ToS add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the conversation IP Groups with ToS traffic use the interface bandwidth in octets, bps, or percent.
Top 10 Conversation IP Groups with ToS | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected conversation IP Groups with ToS through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top IP Group Conversations with Protocol

This dashboard provides a view of the top 10 bandwidth-consuming conversations, grouped by IP address group and protocol, that are responsible for the most traffic on your network.

Table 141. Available Widgets

Widget Name | Chart Type | Description
Top 10 Conversation IP Groups with Protocol | Donut | Displays the top 10 conversation IP Groups with protocol by name with their total traffic volume in octets. The percentage for each conversation IP Group with protocol is based on the selection in the widget; the individual conversation IP Groups with protocol add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the conversation IP Groups with protocol traffic use the interface bandwidth in octets, bps, or percent.
Top 10 Conversation IP Groups with Protocol | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected conversation IP Groups with protocol through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Source IP Groups

This dashboard provides a view of the top 10 source hosts, grouped by IP address group, that are responsible for the most traffic on your network.

Table 142. Available Widgets

Widget Name | Chart Type | Description
Top 10 Source IP Groups | Donut | Displays the top 10 source IP Groups with their total traffic volume in octets. The percentage for each source IP Group is based on the selection in the widget; the individual source IP Groups add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the source IP Group traffic uses the interface bandwidth in octets, bps, or percent.
Top 10 Source IP Groups | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected source IP Groups through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top Destination IP Groups

This dashboard provides a view of the top 10 domains that serve as destinations of traffic on the network, grouped by IP address group, that are responsible for the most traffic on your network.

Table 143. Available Widgets

Widget Name | Chart Type | Description
Top 10 Destination IP Groups | Donut | Displays the top 10 destination IP Groups with their total traffic volume in octets. The percentage for each destination IP Group is based on the selection in the widget; the individual destination IP Groups add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the destination IP Group traffic uses the interface bandwidth in octets, bps, or percent.
Top 10 Destination IP Groups | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected destination IP Groups through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Top IP Group Conversations

This dashboard provides a view of the top 10 bandwidth-consuming conversations, grouped by IP address group, that are responsible for the most traffic on your network.
Note: You can select the filter values or time ranges that you want to display in the dashboards to which the filter is assigned.

Table 144. Available Widgets

Widget Name | Chart Type | Description
Top 10 IP Group Conversations | Donut | Displays the top 10 IP Group conversations with their total traffic volume in octets. The percentage for each IP Group conversation is based on the selection in the widget; the individual IP Group conversations add up to 100%. The percentage can be absolute or relative.
Traffic Volume Trend | Timeseries | Displays the Traffic Volume trend in octets across the selected time period. You can instead display the Utilization trend in percent or the Throughput trend in bits per second (bps). It describes how the IP Group conversation traffic uses the interface bandwidth in octets, bps, or percent.
Top 10 IP Group Conversations | Grid | Displays the Top 10 traffic data (in octets and packets) that flows in the selected conversation IP Groups through the selected interface of a device. The columns displayed depend on the flow direction (Inbound or Outbound) for the selected time period.

Applications Response Time dashboard

This topic gives you an overview of how to use the Applications Response Time dashboard.

Access Flow Applications Response Time view

The Applications Response Time page allows you to view network response time issues for a particular application or category. You can drill down to the Applications Response Time page from the following dashboards:
1. Click Home > Network Performance Overview. The Network Performance Overview: Top 10 page loads.
• From the Congestion pane, click any bar for an interface in the Top 10 Applications by Total Delay (ms) bar widget.
2. Click Home > Network Performance Overview by Deviation. The Network Performance Overview by Deviation: Top 10 Deviation page loads.
• From the Congestion pane, click any bar for an interface in the Top 10 Applications by Total Delay (ms) bar widget.
3. Click NetFlow > Applications Response Overview. The Applications Response Overview: Top 10 dashboard loads. You can drill down to the Applications Response Time: Response Time for Interface page from the following widgets by clicking any of the bars that represent the application:
• Top 10 Applications by Client Network Delay (ms)
• Top 10 Applications by Server Network Delay (ms)
• Top 10 Applications by Application Delay (ms)
• Top 10 Applications by Total Delay (ms)

Applications Response Time widgets

You can filter the Applications Response Time metrics with the following filters:
1. From the filter options, choose the Device, Interface, Application, Target, and Time period, and click Apply Filter.
Note: You can select the filter values or time ranges that you want to display in the dashboard to which the filter is assigned. The dashboard refreshes the data according to the filter attribute values.

Table 145. Applications Response Time Widgets

Widget Name | Chart Type | Description
Applications Response Time | Badge | Displays the following Applications Response Time metrics in milliseconds: Max Client Network Delay (ms), Max Server Network Delay (ms), Max Application Delay (ms), and Max Total Delay (ms).
Response Time Trend | Timeseries | Shows the Applications Response Time metrics, which display the application response time delay contributed by the client, the server, and the application server side. The metrics, shown in milliseconds (ms), are Max Client Network delay, Max Server Network delay, Max Application delay, and Max Total delay.
Response Time in milliseconds (ms) | Grid | Shows the same application response time metrics, in milliseconds, in grid form.

Navigate to different NetFlow dashboards
This section describes the navigation from an existing NetFlow dashboard to another NetFlow Top 10 dashboard view. You can navigate to a different NetFlow dashboard by clicking the aggregation type that you want to view. For example:
1. Click NetFlow > Applications > Top Applications. The Top Applications: Traffic Volume Details for Interface dashboard loads.
2. From the Select aggregation type to view pane, click any of the options to view the NetFlow dashboard. The selected NetFlow Top 10 dashboard loads in a new tab.

Aggregation type to view
The views are categorized into the following types:
• Aggregation
• IP Grouping aggregation
The following tables explain which NetFlow dashboard is loaded based on your aggregation type selection.

Table 146. Aggregation type to view Aggregation NetFlow Dashboards

>> Applications Top Applications

>> Applications with ToS Top Applications with ToS

>> Sources Top Sources


>> Sources with Application Top Sources with Application

>> Destinations Top Destinations

>> Destinations with Application Top Destinations with Application

>> Conversations Top Conversations

>> Conversations with Application Top Conversations with Application

>> Conversations with ToS Top Conversations with ToS

>> Protocols Top Protocols

>> Protocols with Application Top Protocols with Application

>> Protocols with Conversation Top Protocols with Conversation

>> Protocols with Destination Top Protocols with Destination

>> Protocols with Source Top Protocols with Source

>> AS Conversations Top Autonomous System Conversations

>> Destination AS Top Destination Autonomous Systems

>> Source AS Top Source Autonomous Systems

>> ToS Top ToS

Table 147. IP Grouping aggregation type to view IP Grouping aggregation NetFlow Dashboards

>> Source IP Groups Top Source IP Groups

>> Source IP Groups with Application Top Source IP Groups with Application

>> Source IP Groups with ToS Top Source IP Groups with ToS

>> Source IP Groups with Protocol Top Source IP Groups with Protocol

>> Destination IP Groups Top Destination IP Groups

>> Destination IP Groups with Application Top Destination IP Groups with Application

>> Destination IP Groups with ToS Top Destination IP Groups with ToS

>> Destination IP Groups with Protocol Top Destination IP Groups with Protocol

>> IP Group Conversations Top IP Group Conversations


>> IP Group Conversations with Application Top IP Group Conversations with Application

>> IP Group Conversations with ToS Top IP Group Conversations with ToS

>> IP Group Conversations with Protocol Top IP Group Conversations with Protocol

On Demand Filtering dashboards
You can view the network issues for troubleshooting with the On Demand Filtering dashboards, which display information about your network.
On Demand Filtering helps you to investigate in detail the performance of a specific interface over a period for a set of KPIs. You can identify the latency or jitter in your network for further troubleshooting. On Demand Filtering consists of the Device Health, Flow, HTTP Operations, IPSLA, and Timeseries Data dashboards. It provides real-time and historical KPIs that actively monitor the network status and network quality experience. Probes are one of the most effective ways to gain insights to facilitate root-cause analysis across network interfaces. You can also view the historical trend of loss and latency at the network interface level for troubleshooting network connectivity issues.
The Device Health dashboard shows the network health-related KPIs such as CPU, memory, interface traffic, and utilization. The IPSLA dashboard supports Cisco, Huawei, and Juniper IPSLA KPIs on jitter and latency quality for sensitive network services, while the Flow dashboard displays the total octets for all the different aggregation views. The HTTP Operations dashboard shows the HTTP server response time (RTT). The measurements consist of three types:
• DNS lookup, the RTT taken to perform a domain name lookup.
• TCP Connect, the RTT taken to perform a TCP connection to the HTTP server.
• HTTP transaction time, the RTT taken to send a request and get a response from the HTTP server.
The Timeseries Data dashboard populates the SNMP performance metrics that are available from the timeseries database. Rapid SNMP device onboarding is a new feature to bring new devices such as routers, switches, and servers into a network environment smoothly and seamlessly. You can create your own custom discovery formulas, collection formulas, and metrics and deploy them in the Network Performance Insight system. Network Performance Insight can start discovery and polling from the new devices and their resources within a day. The collected metrics are stored in the timeseries database and are populated on the Timeseries Data dashboard.
Categories of On Demand Filtering dashboards:
• Device Health
• Flow
• HTTP Operations
• IPSLA
• Timeseries Data

Device Health dashboard
The On Demand Filtering Device Health dashboard helps you to identify any anomalies just by visualizing the network entity trend charts. You can drill down further to check the historical data for troubleshooting.

Device Health
1. Click On Demand Filtering > Device Health. The Device Health dashboard loads with device health monitoring details for the selected KPI over the selected time period.
2. From the filter options, choose the Device, KPI, Sort By, and Time period and click Apply Filter.
Note: You can select the filter values or time ranges that you want to display in dashboards to which the filter is assigned. The dashboard refreshes the data according to the filter attribute values.
The Device Health dashboard shows the network health-related KPIs such as CPU, memory, interface traffic, and utilization. You can detect anomalies by monitoring the spikes, dips, or irregular trends in the performance data on these dynamic charts. Each page displays a maximum of four dynamic charts.
Note: You cannot see any plotted data in the dynamic chart when your query result has fewer than four data points. The dynamic chart populates the data only if the query returns more than four data points.

Table 148. Total network-related health entities dynamic charts Widget name Description Drill-down dashboard

Widget name: Trend For <entity> over the selected time period. Note: <entity> denotes either Total Interfaces, CPU, Memory, or Temperature.
Description: Each of the dynamic charts dynamically plots the total interfaces, CPU, Memory, or Temperature based on the filter attribute values. It is a dynamic chart that allows drill down to the Device Health History dashboard for the selected KPI.
Drill-down dashboard: Displays the Device Health History dashboard for the selected KPI with the other filter attribute values. The Device Health History dashboard consists of two sets of information:
a. A timeseries chart that displays the total network-related health entities for the selected time period.
b. Predefined time range of historical data for the selected KPI for:
• Last 24 Hours
• Last 7 Days
• Last 30 Days
• Last 365 Days

3. Click the identified data point from the dynamic chart to drill down. The Device Health History dashboard for the selected KPI page loads in a new tab. It displays the detailed Device Health data and the historical data based on the filter attribute values from Device Health On Demand Filtering dashboard.

Table 149. Available widgets Widget name Chart type Description

Last 24 Hours <KPI> >> <Device> >> <Resource> Timeseries Displays the KPI history data of the last 24 hours from the current time for the selected device. For example: Last 24 Hours Inbound Utilization (%) >> 10.55.239.100 >> Fa0/1. The historical data is displayed in accordance with the filter attribute values.

Last 7 Days <KPI> >> <Device> >> <Resource> Timeseries Displays the KPI history data of the last 7 days from the current time for the selected device. For example: Last 7 Days Inbound Utilization (%) >> 10.55.239.100 >> Fa0/1. The historical data is displayed in accordance with the filter attribute values.

Last 30 Days <KPI> >> <Device> >> <Resource> Timeseries Displays the KPI history data of the last 30 days from the current time for the selected device. For example: Last 30 Days Inbound Utilization (%) >> 10.55.239.100 >> Fa0/1. The historical data is displayed in accordance with the filter attribute values.

Last 365 Days <KPI> >> <Device> >> <Resource> Timeseries Displays the KPI history data of the last 365 days from the current time for the selected device. For example: Last 365 Days Inbound Utilization (%) >> 10.55.239.100 >> Fa0/1. The historical data is displayed in accordance with the filter attribute values.

Min, Avg, Max Badge Note: These badges are available for all the widgets in the dashboard.
• Min indicates the minimum value of the KPI within the time period.
• Avg indicates the average value of the KPI within the time period.
• Max indicates the maximum value of the KPI within the time period.

4. You can select the filter values or time ranges that you want to display in the charts from the Device Health History dashboard filter options. From the filter options, choose the Device, KPI, Resource, and Time period and click Apply Filter. The dashboard refreshes the data according to the filter attribute values.

IPSLA dashboard
The On Demand Filtering IPSLA dashboard helps you to identify the probes between the source and destination IP addresses. You can identify the latency or jitter in your network over a period for a set of KPIs for further troubleshooting.

IPSLA
1. Click On Demand Filtering > IPSLA. The IPSLA dashboard loads. The IPSLA dashboard can be used to assess the network quality by using network parameters such as delay, loss, jitter, and more to identify network issues that cause service quality degradation.
2. From the filter options, choose the SLA Test, Source, Sort By, KPI, and Time period and click Apply Filter.
Note:
• You can select the filter values or time ranges that you want to display in dashboards to which the filter is assigned.
• The KPI list is populated based on the selected Service Level Agreement (SLA) Test list option.
The dashboard refreshes the data according to the filter attribute values. This dashboard provides the IPSLA metric monitoring details for the selected KPI over the selected time period. You can detect anomalies by monitoring the spikes, dips, or irregular trends in the performance data on the Total probes dynamic charts. Each page displays a maximum of four dynamic charts.
Note: You cannot see any plotted data in the dynamic chart when your query result has fewer than four data points. The dynamic chart populates the data only if the query returns more than four data points.

Table 150. Total Probes dynamic charts Widget name Description Drill-down dashboard

Widget name: Trend For <source> - <destination>
Description: It is a dynamic chart that allows drill down to the IPSLA History dashboard for the source IP. Each of the dynamic charts dynamically plots the probes that help to identify end-to-end network connectivity issues between the selected router and devices by using IP addresses. The data is displayed in accordance with the filter attribute values.
Drill-down dashboard: It displays the IPSLA History dashboard for the selected source with the other filter attribute values. The IPSLA History dashboard consists of two sets of information:
a. A timeseries chart that displays the probe counts for the selected time period.
b. Predefined time range of historical data for the selected KPI for:
• Last 24 Hours
• Last 7 Days
• Last 30 Days
• Last 365 Days

3. Click the identified data point from the dynamic chart to drill down. The IPSLA History dashboard for the selected source and destination IP page loads in a new tab. It displays the detailed IPSLA data and the historical data based on the filter attribute values from IPSLA On Demand Filtering dashboard.

Table 151. Available widgets Widget name Chart type Description

Probe History Data

Last 24 Hours Probe Count (count) >> Source <source> >> Destination <destination> Timeseries Displays the probe history data of the last 24 hours from the current time for the selected source and destination IP. For example: Last 24 Hours Probe Count (count) >> Source 10.25.238.209 >> Destination 4.10.110.50. The historical data is displayed in accordance with the filter attribute values.

Last 7 Days Probe Count (count) >> Source <source> >> Destination <destination> Timeseries Displays the probe history data of the last 7 days from the current time for the selected source and destination IP. For example: Last 7 Days Probe Count (count) >> Source 10.25.238.209 >> Destination 4.10.110.50. The historical data is displayed in accordance with the filter attribute values.

Last 30 Days Probe Count (count) >> Source <source> >> Destination <destination> Timeseries Displays the probe history data of the last 30 days from the current time for the selected source and destination IP. For example: Last 30 Days Probe Count (count) >> Source 10.25.238.209 >> Destination 4.10.110.50. The historical data is displayed in accordance with the filter attribute values.

Last 365 Days Probe Count (count) >> Source <source> >> Destination <destination> Timeseries Displays the probe history data of the last 365 days from the current time for the selected source and destination IP. For example: Last 365 Days Probe Count (count) >> Source 10.25.238.209 >> Destination 4.10.110.50. The historical data is displayed in accordance with the filter attribute values.


Min, Avg, Max Badge Note: These badges are available for all the widgets in the dashboard.
• Min indicates the minimum value of the KPI within the time period.
• Avg indicates the average value of the KPI within the time period.
• Max indicates the maximum value of the KPI within the time period.

4. You can select the filter values or time ranges that you want to display in the charts from the IPSLA History dashboard filter options. From the filter options, choose the SLA Test, Source, Destination, KPI, and Time period and click Apply Filter. The dashboard refreshes the data according to the filter attribute values.

Flow dashboard
The On Demand Filtering Flow dashboard helps you to identify any anomalies just by visualizing the network interface traffic volume trend charts. You can drill down further to check the historical data for troubleshooting.

NetFlow
1. Click On Demand Filtering > Flow. The Flow dashboard loads. The Flow dashboard displays the total flow interfaces for either inbound or outbound traffic. For example, it displays the total flow interfaces for the selected device based on the aggregation type and time range. By default, the dashboard displays the total flow interfaces for the Application aggregation type.
2. From the filter options, choose the Device, Direction, Aggregation Type, Top N, and Time period and click Apply Filter.
Note: You can select the filter values or time ranges that you want to display in dashboards to which the filter is assigned. The dashboard refreshes the data according to the filter attribute values.
You can detect anomalies by monitoring the spikes, dips, or irregular trends in the performance data on the Total Flow Interface(s): <n> dynamic charts. Each page displays a maximum of two dynamic charts. Each dynamic chart dynamically plots the interfaces.
Note:
• <n> denotes the number of total flow-enabled interfaces.
• You will not see any data plotted in the dynamic chart when your query result has fewer than four data points. The dynamic chart only populates the data if the query returns more than four data points.

Table 152. Total Flow Interface(s): dynamic charts Widget name Description Drill-down dashboard

Widget name: Trend For <aggregation type view>
Description: It is a dynamic chart that allows drill down to the Flow History dashboard for the selected aggregation type view. Each of the dynamic charts shows either the inbound or outbound flow traffic volume trend of an interface per aggregation view for the selected device. The data is displayed in accordance with the filter attribute values.
Drill-down dashboard: It displays the Flow History dashboard for the selected aggregation type view with the filter attribute values. The Flow History dashboard consists of two sets of information:
a. A timeseries chart that displays the traffic volume in octets for the selected time period.
b. Predefined time range of historical data for the selected interface of an aggregation type view for:
• Last 24 Hours
• Last 7 Days
• Last 30 Days
• Last 365 Days

3. Click the identified data point from the dynamic chart to drill down. The Flow History dashboard for the selected aggregation type view page loads in a new tab. It displays the detailed flow data and the historical data for the selected aggregation type view with the filter attribute values.

Table 153. Available widgets Widget name Chart type Description

Flow History Data

Last 24 Hours <device> >> <interface> >> <aggregation type> Timeseries Displays the flow traffic volume in octets for the selected interface for the last 24 hours from the current time. For example: Last 24 Hours 10.55.239.219 >> 10.55.239.219-Fa0/1 >> Applications. The historical data is displayed in accordance with the filter attribute values.


Last 7 Days <device> >> <interface> >> <aggregation type> Timeseries Displays the flow traffic volume in octets for the selected interface for the last 7 days from the current time. For example: Last 7 Days 10.55.239.219 >> 10.55.239.219-Fa0/1 >> Applications. The historical data is displayed in accordance with the filter attribute values.

Last 30 Days <device> >> <interface> >> <aggregation type> Timeseries Displays the flow traffic volume in octets for the selected interface for the last 30 days from the current time. For example: Last 30 Days 10.55.239.219 >> 10.55.239.219-Fa0/1 >> Applications. The historical data is displayed in accordance with the filter attribute values.

Last 365 Days <device> >> <interface> >> <aggregation type> Timeseries Displays the flow traffic volume in octets for the selected interface for the last 365 days from the current time. For example: Last 365 Days 10.55.239.219 >> 10.55.239.219-Fa0/1 >> Applications. The historical data is displayed in accordance with the filter attribute values.

4. You can select the filter values or time ranges that you want to display in the charts from the Flow History dashboard filter options. From the filter options, choose the Device, Interface, Direction, Aggregation Type, Top N, and Time period and click Apply Filter. The dashboard refreshes the data according to the filter attribute values.

HTTP Operations dashboard
The On Demand Filtering HTTP Operations dashboard displays the round-trip time (RTT) between a Cisco device and an HTTP server to retrieve a web page. You can drill down further to view the response time trend over a period for a set of KPIs for troubleshooting.

HTTP Operations
1. Click On Demand Filtering > HTTP Operations. The HTTP Operations dashboard loads with the round-trip time (RTT) to help you analyze how an HTTP server is performing.
2. From the filter options, choose the Source, Sort By, KPI, and Time period and click Apply Filter.
Note:
• You can select the filter values or time ranges that you want to display in dashboards to which the filter is assigned.
• The KPI list is populated based on the IP Service Level Agreement (SLA) HTTP Operation.
This dashboard provides the HTTP Operations metric monitoring details for the selected KPI over the selected time period. You can detect anomalies by monitoring the spikes, dips, or irregular trends in the performance data on the Total probes dynamic charts. Each page displays a maximum of four dynamic charts.

Note: You will not see any data plotted in the dynamic chart when your query result has fewer than four data points. The dynamic chart only populates the data if the query returns more than four data points.

Table 154. Total Probes dynamic charts Widget name Description Drill-down dashboard

Widget name: Trend For <source> - <destination>
Description: It is a dynamic chart that allows drill down to the HTTP Response Time: IPSLA HTTP Operations page for the source IP. Each of the dynamic charts dynamically plots the probes, which helps to identify the round-trip time (RTT) between a Cisco device and an HTTP server to retrieve a web page. The data is displayed in accordance with the filter attribute values.
Drill-down dashboard: It displays the HTTP Response Time: IPSLA HTTP Operations page for the selected source with the other filter attribute values. The HTTP Response Time: IPSLA HTTP Operations dashboard consists of two sets of information:
a. A timeseries chart that displays the response time trend for the selected <source> - <destination>.
b. A grid chart that displays the response time in milliseconds.

3. Click the identified data point from the dynamic chart to drill down. The HTTP Response Time: IPSLA HTTP Operations for the selected source and destination IP page loads in a new tab.

Table 155. Available widgets Widget name Chart type Description

Response Time Trend Timeseries The widget displays the HTTP response time for the HTTP operation metrics.

Response Time in milliseconds (ms) Grid By default, the information is shown based on the filter attribute values from the HTTP Operations On Demand Filtering dashboard.

4. You can select the filter values or time ranges that you want to display in the charts from the HTTP Response Time: IPSLA HTTP Operations page filter options. From the filter options, choose the Source, Destination, and Time period and click Apply Filter. The dashboard refreshes the data according to the filter attribute values.

Timeseries Data dashboard
The On Demand Filtering Timeseries Data dashboard displays SNMP performance metrics. It also allows you to monitor new device types and vendors.
The Timeseries Data dashboard populates the SNMP performance metrics that are available from the timeseries database. It can also be used for the Rapid SNMP device onboarding scenario, where you can view the new SNMP metrics that are created and deployed in the Network Performance Insight system.
Note: To use the Timeseries Data dashboard in the Rapid SNMP device onboarding scenario, perform the following tasks:
• Create the custom formulas and metrics for the newly onboarded devices.

• Deploy them in the Network Performance Insight system to start using the content to discover and poll the devices and resources.

Timeseries Data
1. Click On Demand Filtering > Timeseries Data. The Timeseries Data dashboard loads. The Timeseries Data dashboard displays the collected performance metrics that are stored in the Network Performance Insight Dashboards timeseries database.
2. From the filter options, choose the Device, Resource Type, Resource Name, and Time period and click Apply Filter.
Note: You can select the filter values or time ranges that you want to display in dashboards to which the filter is assigned. The List of KPI widget populates the available KPIs for the selected device and resource type.
3. From the List of KPI widget check boxes, select the required KPIs to be populated on the KPI(s) Trend chart.
a. Select the KPI check boxes.
b. Right-click the selection and click KPI(s) Trend. The KPI(s) Trend timeseries chart refreshes according to the selected KPI list.

Table 156. Widget interactions Widget name Description Drill-down dashboard

Widget name: List of KPI
Description: Lists the available KPIs for the selected device and resource type. You can view one or more KPI trends on the timeseries chart by selecting the KPI check boxes. You can set the KPI list filter condition:
• Click the define filter icon. The Filter pop-up window loads.
• Set the conditions from the Filter pop-up window and click Filter. The KPI list refreshes according to the filter conditions that are set.
Drill-down dashboard: There is no drill-down from this widget. The KPI(s) Trend - <'KPI Name'> timeseries chart refreshes according to the selected KPI list.


Widget name: KPI(s) Trend
Description: A dynamic timeseries chart that displays the KPI trend for the selected KPIs. It is a dynamic chart that allows drill down to the Timeseries Data History page for the KPI and filter attribute values.
Note: You will not see any data plotted in the dynamic chart when your query result has fewer than four data points. The dynamic chart only populates the data if the query returns more than four data points.
Drill-down dashboard: The Timeseries Data History page displays the predefined time range of history data for the selected KPI and resource type for:
• Last 24 Hours
• Last 7 Days
• Last 30 Days
• Last 365 Days

4. Click the identified data point from the KPI(s) Trend dynamic timeseries chart to drill down. The Timeseries Data History page loads in a new tab. It displays the selected KPI's data for the predefined time range of historical data.

Table 157. Historical Data widgets Widget name Chart type Description

Last 24 Hours >> Resource Type <resource type> >> Resource Name <resource name> Timeseries Displays the KPI's timeseries data for the selected device and resource type for the last 24 hours from the current time. For example: Last 24 Hours >> Resource Type interface >> Resource Name ge-1/0/0. The historical data is displayed in accordance with the filter attribute values.

Last 7 Days >> Resource Type <resource type> >> Resource Name <resource name> Timeseries Displays the KPI's timeseries data for the selected device and resource type for the last 7 days from the current time. For example: Last 7 Days >> Resource Type interface >> Resource Name ge-1/0/0. The historical data is displayed in accordance with the filter attribute values.

Last 30 Days >> Resource Type <resource type> >> Resource Name <resource name> Timeseries Displays the KPI's timeseries data for the selected device and resource type for the last 30 days from the current time. For example: Last 30 Days >> Resource Type interface >> Resource Name ge-1/0/0. The historical data is displayed in accordance with the filter attribute values.

Last 365 Days >> Resource Type <resource type> >> Resource Name <resource name> Timeseries Displays the KPI's timeseries data for the selected device and resource type for the last 365 days from the current time. For example: Last 365 Days >> Resource Type interface >> Resource Name ge-1/0/0. The historical data is displayed in accordance with the filter attribute values.

5. You can select the filter values or time ranges that you want to display in the charts from the Timeseries Data History dashboard filter options. From the filter options, choose the Device and Resource Name, and click Apply Filter. The dashboard refreshes the data according to the filter attribute values.
Related information
Rapid SNMP device onboarding

Chapter 9. Troubleshooting Netcool Operations Insight

Consult these troubleshooting notes to help determine the cause of the problem and what to do about it.

Troubleshooting Netcool Operations Insight on Red Hat OpenShift Use this information to support troubleshooting of errors that might occur when deploying or using Netcool Operations Insight on Red Hat OpenShift.

Version information for pods running on OpenShift You need to send version numbers when reporting a support issue to IBM Support L2 or L3. Generate version and image information by running the following command.

for P in $(oc get po|awk '{print $1}'|grep -v NAME); do echo $P; echo; oc describe po $P|egrep "Image|product[NV]"; echo; echo "======"; echo; done | tee /tmp/NOI_version_info.txt

This produces an output file that can be attached to a support case. The following code snippets provide examples of the content of this file: Example 1:

======
noi-operator-7fbfd7dc48-k55hg

productName: IBM Netcool Operations Insight
productVersion: 1.6.1
Image: docker.io/ibmcom/noi-operator@sha256:976fe419feb8735379cd5100c0012cc5749b5c98a1f452c89a81fc852e7f9987
Image ID: docker.io/ibmcom/noi-operator@sha256:976fe419feb8735379cd5100c0012cc5749b5c98a1f452c89a81fc852e7f9987

======

Example 2:

======
om204-topology-status-669cb6c44-ksw9w
productName: IBM Netcool Operations Insight Agile Service Manager
productVersion: 1.1.8
Image: hyc-hdm-staging-docker-virtual.artifactory.swg-devops.com/nasm-ubi-base-layer:3.2.0-202006121630-L-PPAN-BNYCMS
Image ID: hyc-hdm-staging-docker-virtual.artifactory.swg-devops.com/nasm-ubi-base-layer@sha256:a245018c2cef2712335ea50c3b074767d8ee2476d7bb86940734026daa98d3ff
Image: hyc-hdm-staging-docker-virtual.artifactory.swg-devops.com/nasm-status-service:1.0.0.11808-L-PPAN-BNYCMS
Image ID: hyc-hdm-staging-docker-virtual.artifactory.swg-devops.com/nasm-status-service@sha256:d2ebac219adc762ab8bb87450e8c6af72c55b1c2268953841fc0b72888e99969
======

Troubleshooting cloud-based installations
Use the following troubleshooting information to resolve problems with cloud-based installations.

Viewing installation logs View installation logs for information on the success of the installation process.

Viewing Kubernetes logs To view logging information, run the kubectl logs command from the command line.

About this task
There are three levels of detail at which you can report on the progress of pod and container installation:
Displaying pod status
Run the following command to see overall status for each pod.

kubectl get pods

Run the following command if the namespace is non-default.

kubectl get pods --namespace namespace

Where namespace is the name of the non-default namespace.
Displaying a log file
Run the following command to display log files for a specific pod or container within that pod.

kubectl logs name_of_pod [-c name_of_container]

The following section lists the relevant commands for the different pods and containers. Primary ObjectServer

kubectl logs name_of_objserv-primary-pod

Backup ObjectServer

kubectl logs name_of_objserv-backup-pod -c ncobackup-agg-b

Failover gateway

kubectl logs name_of_objserv-backup-pod -c ncobackup-agg-gate

WebGUI

kubectl logs name_of_webgui-pod -c webgui

Log Analysis

kubectl logs name_of_log-analysis-pod -c unity

XML gateway

kubectl logs name_of_log-analysis-pod -c gateway

Primary Impact Server

kubectl logs name_of_impactcore-primary-pod -c nciserver

Backup Impact Server

kubectl logs name_of_impactcore-backup-pod -c nciserver

Impact GUI Server

kubectl logs name_of_impactgui-pod -c impactgui

Db2

kubectl logs name_of_db2ese-pod -c db2ese

Proxy

kubectl logs name_of_proxy-pod

OpenLDAP

kubectl logs name_of_openLDAP-pod

cloud native analytics

kubectl logs name_of_cassandra-pod

kubectl logs name_of_couchdb-pod

kubectl logs name_of_ea-noi-layer-eanoigateway-pod

kubectl logs name_of_ea-noi-layer-eanoiactionservice-pod

kubectl logs name_of_ibm-hdm-analytics-dev-inferenceservice-pod

kubectl logs name_of_ibm-hdm-analytics-dev-eventsqueryservice-pod

kubectl logs name_of_ibm-hdm-analytics-dev-archivingservice-pod

kubectl logs name_of_ibm-hdm-analytics-dev-servicemonitorservice-pod

kubectl logs name_of_ibm-hdm-analytics-dev-policyregistryservice-pod

kubectl logs name_of_ibm-hdm-analytics-dev-ingestionservice-pod

kubectl logs name_of_ibm-hdm-analytics-dev-trainer-pod

kubectl logs name_of_ibm-hdm-analytics-dev-collater-aggregationservice-pod

kubectl logs name_of_ibm-hdm-analytics-dev-normalizer-aggregationservice-pod

kubectl logs name_of_ibm-hdm-analytics-dev-dedup-aggregationservice-pod

kubectl logs name_of_spark-master-pod

kubectl logs name_of_spark-slave-pod

kubectl logs name_of_ea-ui-api-graphql-pod

kubectl logs name_of_ibm-hdm-common-ui-uiserver-pod

kubectl logs name_of_kafka-pod

kubectl logs name_of_zookeeper-pod

kubectl logs name_of_redis-sentinel-pod

kubectl logs name_of_redis-server-pod

Following a log file Run the following command to stream a log file for a specific pod or container within that pod.

kubectl logs -f name_of_pod [-c name_of_container]

The following section lists the relevant commands for the different pods and containers. Primary ObjectServer

kubectl logs -f --tail=1 name_of_objserv-primary-pod

Backup ObjectServer

kubectl logs -f --tail=1 name_of_objserv-backup-pod -c ncobackup-agg-b

Failover gateway

kubectl logs -f --tail=1 name_of_objserv-backup-pod -c ncobackup-agg-gate

WebGUI

kubectl logs -f --tail=1 name_of_webgui-pod -c webgui

Log Analysis

kubectl logs -f --tail=1 name_of_log-analysis-pod -c unity

XML gateway

kubectl logs -f --tail=1 name_of_log-analysis-pod -c gateway

Primary Impact Server

kubectl logs -f --tail=1 name_of_impactcore-primary-pod -c nciserver

Backup Impact Server

kubectl logs -f --tail=1 name_of_impactcore-backup-pod -c nciserver

Impact GUI Server

kubectl logs -f --tail=1 name_of_impactgui-pod -c impactgui

Db2

kubectl logs -f --tail=1 name_of_db2ese-pod -c db2ese

Proxy

kubectl logs -f --tail=1 name_of_proxy-pod

OpenLDAP

kubectl logs -f --tail=1 name_of_openLDAP-pod

cloud native analytics

kubectl logs -f --tail=1 name_of_cassandra-pod

kubectl logs -f --tail=1 name_of_couchdb-pod

kubectl logs -f --tail=1 name_of_ea-noi-layer-eanoigateway-pod

kubectl logs -f --tail=1 name_of_ea-noi-layer-eanoiactionservice-pod

kubectl logs -f --tail=1 name_of_ibm-hdm-analytics-dev-inferenceservice-pod

kubectl logs -f --tail=1 name_of_ibm-hdm-analytics-dev-eventsqueryservice-pod

kubectl logs -f --tail=1 name_of_ibm-hdm-analytics-dev-archivingservice-pod

kubectl logs -f --tail=1 name_of_ibm-hdm-analytics-dev-servicemonitorservice-pod

kubectl logs -f --tail=1 name_of_ibm-hdm-analytics-dev-policyregistryservice-pod

kubectl logs -f --tail=1 name_of_ibm-hdm-analytics-dev-ingestionservice-pod

kubectl logs -f --tail=1 name_of_ibm-hdm-analytics-dev-trainer-pod

kubectl logs -f --tail=1 name_of_ibm-hdm-analytics-dev-collater-aggregationservice-pod

kubectl logs -f --tail=1 name_of_ibm-hdm-analytics-dev-normalizer-aggregationservice-pod

kubectl logs -f --tail=1 name_of_ibm-hdm-analytics-dev-dedup-aggregationservice-pod

kubectl logs -f --tail=1 name_of_spark-master-pod

kubectl logs -f --tail=1 name_of_spark-slave-pod

kubectl logs -f --tail=1 name_of_ea-ui-api-graphql-pod

kubectl logs -f --tail=1 name_of_ibm-hdm-common-ui-uiserver-pod

kubectl logs -f --tail=1 name_of_kafka-pod

kubectl logs -f --tail=1 name_of_zookeeper-pod

kubectl logs -f --tail=1 name_of_redis-sentinel-pod

kubectl logs -f --tail=1 name_of_redis-server-pod

Related information
Kubernetes documentation: kubectl command reference. This documentation lists all Kubernetes commands and provides examples.

Errors on command line after installation If you receive the command line response Errors when you run the kubectl get command following a Cloud or Hybrid deployment, then you must run a series of checks to determine if there is an error, and where that error is.

Symptom Following a Cloud or Hybrid deployment, run a command similar to the following:

kubectl get noi my_deployment -o yaml

Where my_deployment is the name of your deployment. If you see the following response fragment on the command line, then you need to investigate further:

status:
  phase: Error

Investigating the problem Run the following commands to investigate further.

Table 158. Further investigation

Item to check: Ensure all jobs are in the correct state.
Command: oc get jobs --all-namespaces

Item to check: Ensure correct version.
Command: kubectl describe noi om193 | egrep ^build

Item to check: Check status of important parameters; some examples are given in the cell to the right.
Command: noi, noitransformations, nohybrid

Item to check: Check noi_operator pods.
Command: kubectl logs noi-operator-566b845789-kzfrc

Item to check: If there are noi_operator pod startup issues, then get more details.
Command: kubectl describe pod noi-operator-566b845789-kzfrc | egrep Events -A 100
kubectl get events

Item to check: Extra checks for OLM (Operator Lifecycle Manager) deployments.
Command: kubectl get catalogsource --all-namespaces
kubectl get subscription --all-namespaces

Installation hangs with nciserver-0 pod in CreateContainerConfigError state

Problem The nciserver-0 pod will not start and has a status of CreateContainerConfigError. The log for the nciserver-0 pod shows: Error: secret “noi-cem-cemusers-cred-secret” not found. The ibm-hdm-analytics-dev-setup pod has not fully started, and its logs show failures connecting to Cassandra: Warning BackoffLimitExceeded Job has reached the specified backoff limit.

Resolution Use the following command to restart the ibm-hdm-analytics-dev-setup pod, which will cause it to reattempt to connect to the Cassandra database:

oc delete pod custom_resource-ibm-hdm-analytics-dev-setup

Where custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml /noi.ibm.com_nois_cr.yaml (CLI install).

ENOTFOUND error How to workaround an ENOTFOUND error.

Problem For any connectivity issues, either during or after a deployment of IBM Netcool Operations Insight on Red Hat OpenShift, there might be an underlying Red Hat OpenShift issue.

Solution
Check the pods running in the openshift-dns namespace and restart any pods that are in trouble.
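For example, a minimal check-and-restart sequence (the pod name shown is hypothetical; substitute the name of the failing pod from your own output):

oc get pods -n openshift-dns
oc delete pod dns-default-abcde -n openshift-dns

The deleted pod is recreated automatically by its controller.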

ImageRepository field is empty The ImageRepository field in the Operations Management UI is not pre-populated.

Problem If you are installing Netcool Operations Insight on Red Hat OpenShift, the ImageRepository field might be empty.

Resolution Use the following command to help you find the value that you need to supply.

oc get route docker-registry

Use the value of host that is returned by this command, with the namespace of your Netcool Operations Insight on Red Hat OpenShift deployment appended as the value for ImageRepository.
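For example, a minimal sketch of retrieving the host value (the host name and namespace in the comment are hypothetical):

oc get route docker-registry -o jsonpath='{.spec.host}'
# if this returns docker-registry-default.apps.example.com and your deployment
# namespace is netcool, the ImageRepository value would be:
#   docker-registry-default.apps.example.com/netcool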

Install timeout error How to debug the cause of an installation timeout.

About this task When you attempt an installation of Netcool Operations Insight on Red Hat OpenShift and receive a timeout error, use the following procedure to help you to debug the cause of the error.

helm install . -f ~/values.yaml --tls --name my-noi-release
Error: timed out waiting for the condition

Procedure 1. Find the tiller pod name, and search for entries about the failed install in the tiller pod's logs, as in the following example where the failed deployment is called my-noi-release.

kubectl get pod -n kube-system | grep tiller
tiller-deploy-5d8494fb8-ggrzl 1/1 Running 1 91d

kubectl logs tiller-deploy-5d8494fb8-ggrzl -n kube-system | grep my-noi-release

[tiller] 2019/06/06 09:14:43 warning: Release my-noi-release pre-install ibm-netcool-prod/templates/preinstallhook.yaml could not complete: timed out waiting for the condition
[tiller] 2019/06/06 09:14:43 deleting pre-install hook my-noi-release-verifysecrets for release my-noi-release due to "hook-failed" policy

The pre-install hook job my-noi-release-verifysecrets failed, and was deleted. 2. Attempt the installation again, as in the following example:

helm install . -f ~/values.yaml --tls --name my-noi-release

While this command is running, use another shell to monitor the my-noi-release-verifysecrets job for problems:

kubectl describe job my-noi-release-verifysecrets

Warning FailedCreate 8s (x2 over 19s) job-controller Error creating: pods "my-noi-release-verifysecrets-" is forbidden: error looking up service account testnew/noi-service-account: serviceaccount "noi-service-account" not found

The warning message shows that the expected service account, noi-service-account, was not found. 3. To see where this service account is created, search the charts in the Netcool Operations Insight on Red Hat OpenShift deployment directory for ServiceAccount, as in the following example:

find . -type f | xargs grep "^kind: ServiceAccount"
./templates/serviceaccount.yaml:kind: ServiceAccount

and then view that file:

view ./templates/serviceaccount.yaml

{{ if .Values.global.rbac.create }}
{{- if (not (eq .Values.global.rbac.serviceAccountName "default" )) }}
{{- include "sch.config.init" (list . "sch.chart.config.values") -}}

From this chart, it can be seen that the service account is created in ./templates/serviceaccount.yaml only if Values.global.rbac.create is true. To get the installation to complete successfully, the service account must either be manually created before installation, or Create required RBAC RoleBindings (global.rbac.create in the ibm.netcool values.yaml file) must be set to true.
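If you choose to create the service account manually, a minimal sketch follows; the namespace testnew is taken from the error message in the earlier step, so substitute your own deployment namespace:

kubectl create serviceaccount noi-service-account -n testnew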

NoHostAvailable error When you restart all cassandra pods with the kubectl delete pod command, they should be available with no errors.

Problem After you restart all cassandra pods, log in to cassandra, and run a query, the NoHostAvailable error is displayed.

Resolution List the cassandra pods and restart one of them, as in the following example:

kubectl get pod |grep cass
noi-cassandra-0 1/1 Running 0 75m
noi-cassandra-1 1/1 Running 0 2m6s
noi-cassandra-2 1/1 Running 0 76m

kubectl delete pod noi-cassandra-1
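After the restarted pod is running again, it can be useful to confirm that all Cassandra nodes report UN (Up/Normal) status before rerunning the query, for example:

kubectl exec -ti noi-cassandra-0 -- nodetool status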

Datasources are not persistent When you create a new datasource, on restart of the Dashboard Application Services Hub container (webgui), the datasource is no longer present.

About this task After you create a new datasource by clicking Administration > Datasources in the Dashboard Application Services Hub GUI, when you restart the Dashboard Application Services Hub container (webgui), the datasource is no longer present.

Procedure If you want to visualize the contents of an on-premises ObjectServer in a Web GUI instance running on a container platform, then set up a gateway between your on-premises ObjectServer and the ObjectServer running on your Netcool Operations Insight on Red Hat OpenShift deployment.

Customizations to the default Netcool/Impact are not persisted When Netcool Operations Insight is running on a container platform, any customizations to the default Netcool/Impact connection are not persisted. The default connection is recreated every time the webgui container is restarted.

Procedure To connect with a customized Netcool/Impact connection, you must create a new connection. Any additional Netcool/Impact connection that you create is preserved.

Note: This workaround requires you to connect to an external on-premises Netcool/Impact connection and not to the Netcool/Impact container in your Netcool Operations Insight on Red Hat OpenShift deployment.

Communication between the proxy server and the ObjectServer drops The proxy server and the object server on the Netcool Operations Insight on Red Hat OpenShift are disconnected after 15 minutes.

Problem The proxy server and the object server on the Netcool Operations Insight on Red Hat OpenShift are disconnected.

Resolution
The default timeout value for communication to the ObjectServer on Netcool Operations Insight on Red Hat OpenShift is 15 minutes. You can modify the default value through the proxy server configmap. Set the connectionTimeoutMs value in milliseconds.
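A minimal sketch of the change, assuming the configmap is named custom_resource-proxy-config (the actual name in your deployment might differ; list the configmaps with kubectl get configmap to confirm):

kubectl edit configmap custom_resource-proxy-config
# set, for example, a one-hour timeout:
#   connectionTimeoutMs: "3600000"

Restart the proxy pod afterward so that the new value is picked up.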

Restart of all Cassandra pods causes errors for connecting services When all the Cassandra pods go down simultaneously, the following error is displayed by the cloud native analytics user interface when the pods come back up:

An error occurred while fetching data from the server. The response from the server was '500'. Please try again later.

kubectl get events also outputs a warning:

Warning FailedToUpdateEndpoint Endpoints Failed to update endpoint

Resolution Use the following procedure to resolve this problem. 1. Check the state of the Cassandra nodes. From the Cassandra container, use the Cassandra CLI nodetool, as in the following example:

kubectl exec -ti custom_resource-cassandra-0 bash
[cassandra@m76-cassandra-0 /]$ nodetool status
Datacenter: datacenter1
======
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 10.1.106.37 636.99 KiB 256 100.0% d439ea16-7b55-4920-a9a3-22e878feb844 rack1

Where custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml /noi.ibm.com_nois_cr.yaml (CLI install).
Note: If none of the nodes are in DN status, skip the scaling down steps and proceed to step 8 to restart the pods.
2. Scale Cassandra down to 0 instances with this command:

kubectl scale --replicas=0 StatefulSet/custom_resource-cassandra

Where custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml /noi.ibm.com_nois_cr.yaml (CLI install). 3. Use kubectl get pods | grep cass to verify that there are no Cassandra pods running. 4. Scale Cassandra back up to one instance.

Chapter 9. Troubleshooting Netcool Operations Insight 659 kubectl scale --replicas=1 StatefulSet/custom_resource-cassandra

Where custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml /noi.ibm.com_nois_cr.yaml (CLI install).
5. Use kubectl get pods | grep cass to verify that there is one Cassandra pod running.
6. Repeat step 4, incrementing replicas each time until the required number of Cassandra pods are running. Wait for each Cassandra pod to come up before incrementing the replica count to start another.
7. Verify that the cluster is running with this command:

kubectl exec -ti custom_resource-cassandra-0 bash
[cassandra@m86-cassandra-0 /]$ nodetool status

where custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml /noi.ibm.com_nois_cr.yaml (CLI install). Expect to see UN for all nodes in cluster, as in this example:

Datacenter: datacenter1

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 10.1.26.101 598.28 KiB 256 100.0% bbd34cab-9e91-45c1-bfcb-1fe59855d9b3 rack1
UN 10.1.150.13 654.78 KiB 256 100.0% 555f00c8-c43d-4962-a8a0-72eed028d306 rack1
UN 10.1.228.111 560.28 KiB 256 100.0% 8741a69b-acdb-4736-bc74-905d18ebdafa rack1

8. Restart the pods that connect to Cassandra with the following commands

kubectl delete pod custom_resource-ibm-hdm-analytics-dev-policyregistryservice
kubectl delete pod custom_resource-ibm-hdm-analytics-dev-eventsqueryservice
kubectl delete pod custom_resource-ibm-hdm-analytics-dev-archivingservice

Where custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml /noi.ibm.com_nois_cr.yaml (CLI install).
9. Log in to the UI again.

Load helm charts fails

Problem When the helm charts are loaded, the following error is displayed:

You have not accepted the terms of the license agreement. You must review and accept the terms by setting license=accept.

Resolution Ensure that license=accept is used when you run the load-helm-chart command, as in the following example:

cloudctl catalog load-helm-chart --archive charts/ibm-netcool-prod-2.1.1.tgz license=accept --registry cluster-CA-domain:8500/namespace

where cluster-CA-domain is the name of the certificate authority domain that was used in the config.yaml file during your Netcool Operations Insight on Red Hat OpenShift installation.

660 IBM Netcool Operations Insight: Integration Guide StatefulSet pods with local storage are stuck in Pending state

Problem If you are using local storage and a node in the cluster goes down, then StatefulSet pods remain bound to that node. These pods are unable to restart on another node and are stuck in Pending state because they have a Persistent Volume on the node that is down.

Resolution The persistent volumes and persistent volume claims for these pods must be removed to allow the pods to be reassigned to other nodes. There is a script in the ITOM Developer Center that can be used to do this: http://../..//2019/10/31/cleanup_local_storage Run the script with the following command:

./cleanPVCandPV.sh

Unable to add new groups using WebSphere Application Server

About this task The following error message is displayed if you try to add a new group using WebSphere Application Server console:

CWWIM4520E The 'javax.naming.directory.SchemaViolationException: [LDAP: error code 65 - object class 'groupOfNames' requires attribute 'member']

As a workaround, create the group in LDAP instead using the following procedure.

Procedure 1. Log in to the LDAP Proxy Server pod.

kubectl exec -it custom_resource-openldap-0 /bin/bash

Where custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml /noi.ibm.com_nois_cr.yaml (CLI install). 2. Create the new group a) Create an LDAP Data Interchange Format file to define the new group. For example:

vi test-group.ldif

b) Define the contents of the LDIF file that you created by using a format similar to this example:

dn: cn=newgroup,ou=groups,dc=mycluster,dc=icp cn: newgroup owner: uid=newgroup,ou=users,dc=mycluster,dc=icp description: newgroup test objectClass: groupOfNames member: uid=icpadmin,ou=users,dc=mycluster,dc=icp

Where:
• the value of uid and cn is the name of the new group
• the value of dc is the domain components that were specified for the suffix and baseDN. By default, the value of this parameter is dc=mycluster,dc=icp.
c) Run the following command to create the new group:

ldapadd -h localhost -p 389 -D "cn=admin,dc=mycluster,dc=icp" -w password -f ./test-group.ldif
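To confirm that the group was created, a quick check with ldapsearch (same bind credentials as the ldapadd command above):

ldapsearch -h localhost -p 389 -D "cn=admin,dc=mycluster,dc=icp" -w password -b "ou=groups,dc=mycluster,dc=icp" "(cn=newgroup)"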

New user does not inherit roles from assigned group

About this task When a new user is created and added to a group with WebSphere Application Server Console, the user is not assigned the roles for that group as they should be. You must add the roles that are required for that user with Web GUI. Note: This issue is fixed in version 1.6.0.2.

Procedure 1. Select Console Settings->User Roles and search for your user in Available Users. 2. Select the new user with the missing roles from the displayed results, and then select the required roles for your user.

Cannot launch WebSphere Application Server from Dashboard Application Services Hub on an OpenShift environment

About this task
If your deployment is on a Red Hat OpenShift environment and you attempt to access Console Settings > WebSphere Application Server Console, then the following error is returned:

"502 Bad Gateway The server returned an invalid or incomplete response."

Use the following procedure as a workaround.

Procedure Access WebSphere Application Server directly with a URL in this format:

https://was.custom_resource.master_node_name:port/ibm/console

Where:
• custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml /noi.ibm.com_nois_cr.yaml (CLI install).
• master_node_name is the hostname of the master node.
• port is the value that you specified for ingress_https_port in your configuration yaml file when you installed Netcool Operations Insight on Red Hat OpenShift.
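For example, with hypothetical values throughout: for a deployment named noi, a master node named master.example.com, and ingress_https_port set to 443, the URL would be:

https://was.noi.master.example.com:443/ibm/console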

Cassandra pods not binding to PVCs Cassandra PVCs are left with status 'Pending' when Cassandra PVs are not on the same local node.

Procedure

If local storage is used, the noi-cassandra-* and noi-cassandra-bak-* PVs must be on the same local node. Cassandra pods will fail to bind to their PVCs if this requirement is not met.

System goes into maintenance mode The system goes into maintenance mode.

Problem
This situation can occur when you log in to IBM Netcool/OMNIbus Web GUI as an icpadmin user and only the Console integration menu is displayed in the menu bar. This situation can also occur when you log in as an smadmin user and try to add a role to a user (Console Settings > Users Roles). The following error message is displayed:

The system is in maintenance mode. Please contact your administrator or try again later.

Resolution This is a known issue. To force the system out of maintenance mode, first log in to the pod with the following command:

kubectl exec -ti {release name}-webgui-0 bash

Then, run the following command from the JazzSM/ui/bin/ directory:

consolecli.sh ForceHAUpdate --username console_admin_user_ID --password console_admin_password

The ForceHAUpdate command pushes the local configuration to the database and updates the modules table to match the local node's module versions. Notifications are sent to other nodes to synchronize. Notified nodes with module versions that match those of the originating node are synchronized. Notified nodes with module versions that do not match, go into maintenance mode until an administrator updates their modules accordingly. Restart the db2 pod by exiting the pod and then running the following command:

kubectl delete pod {release-name}-db2ese-0

Wait for the db2 pod to be running, then restart the webgui pod by running the following command:

kubectl delete pod {release-name}-webgui-0

Troubleshooting cloud-based operations Use the following troubleshooting information to resolve problems with cloud-based operations.

Pods cannot restart if secret expires

Problem If you created a Docker registry secret during deployment to enable Netcool Operations Insight to access the internal OpenShift image repository, then any pods that restart when this secret has expired will not have access to pull images, and ImagePullBackoff errors will occur.

Resolution To resolve this, you must recreate the secret that the pods rely on by using the following command:

oc create secret docker-registry noi-registry-secret --docker-username=docker_user --docker-password=docker_pass --docker-server=docker-reg.reg_namespace.svc:5000/namespace

Where:
• noi-registry-secret is the name of the secret that you are creating. The suggested value is noi-registry-secret.
• docker_user is the Docker user, for example kubeadmin.
• docker_pass is the Docker password.
• docker-reg is your Docker registry.
• reg_namespace is the namespace that your Docker registry is in.
• namespace is the name of the namespace that you want to deploy to.

Missing topology analytics menus
An initial user (icpadmin) is created with a new Netcool Operations Insight on Red Hat OpenShift installation, to log in to the Cloud GUI and Web GUI. When logging in to Web GUI with that user, the topology analytics menus are missing from the navigation (for example, Incident > Agile Service Management Incident > Topology viewer).

About this task Complete the following steps to view all menus:

Procedure 1. Go to Web GUI > Console Settings > User Roles. 2. Search for your user. 3. Add the inasm_admin role. 4. Log out of Web GUI and log back in again.

Inference Service fails to re-establish a connection to Kafka
When Kafka is restarted, the inference service fails to re-establish a connection to the Kafka brokers.

Problem
Messages similar to the following are seen in the custom_resource-ibm-hdm-analytics-dev-inferenceservice pod's logs when this failure occurs:

WARN [2020-04-21 15:02:54,079] org.apache.kafka.clients.NetworkClient: [Producer clientId=producer-1] Connection to node 0 (/10.254.23.10:9092) could not be established. Broker may not be available. WARN [2020-04-21 15:02:57,151] org.apache.kafka.clients.NetworkClient: [Producer clientId=producer-1] Connection to node 2 (/10.254.3.51:9092) could not be established. Broker may not be available. WARN [2020-04-21 15:03:00,223] org.apache.kafka.clients.NetworkClient: [Producer clientId=producer-1] Connection to node 1 (/10.254.27.165:9092) could not be established. Broker may not be available.

Where custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml /noi.ibm.com_nois_cr.yaml (CLI install).

Resolution To resolve this issue, restart the inference service pod, so that it reconnects to the new Kafka instance.

oc delete pod custom_resource-ibm-hdm-analytics-dev-inferenceservice-

Where custom_resource is the name of your deployment, as specified by the value used for name (OLM UI install), or metadata.name in noi.ibm.com_noihybrids_cr.yaml /noi.ibm.com_nois_cr.yaml (CLI install).

No restart for dedup-aggregationservice due to OOMKilled
No new actions or new groupings are displayed for events in the Event Viewer because the de-duplicator goes into CrashLoopBackOff or fails to restart.

Problem
When you run the following command:

kubectl describe po -l app.kubernetes.io/component=dedup-aggregationservice

You might see the following output:

Reason: OOMKilled
Exit Code: 0

A message similar to the following can be seen in the kubectl logs on the de-duplicator:

{"name":"clients.kafka","hostname":"pvt-ibm-hdm-analytics-dev-dedup-aggregationservice-b4df85bmsd4c","pid":17,"level":30,"brokerStates":{"0":"UP","1":"UP","2":"UP","-1":"UP"},"partitionStates":{"ea-actions.0":"UP","ea-actions.1":"UP","ea-actions.2":"UP","ea-actions.3":"UP","ea-actions.4":"UP","ea-actions.5":"UP"},"msg":"lib-rdkafka status","time":"2020-02-28T09:55:11.258Z","v":0}
{"name":"clients.kafka","hostname":"pvt-ibm-hdm-analytics-dev-dedup-aggregationservice-b4df85bmsd4c","pid":17,"level":50,"err":{"message":"connect ETIMEDOUT","name":"Error","stack":"Error: connect ETIMEDOUT\n at Socket. (/app/node_modules/ioredis/built/redis/index.js:275:31)\n at Object.onceWrapper (events.js:286:20)\n at Socket.emit (events.js:198:13)\n at Socket._onTimeout (net.js:442:8)\n at ontimeout (timers.js:436:11)\n at tryOnTimeout (timers.js:300:5)\n at listOnTimeout (timers.js:263:5)\n at Timer.processTimers (timers.js:223:10)","code":"ETIMEDOUT"},"msg":"Error from redis client","time":"2020-02-28T09:55:11.955Z","v":0}

Resolution
As a workaround, run the following command:

kubectl -L redis-role get pod | grep redis

You should have one master node. If there is no master node, or there is more than one, scale the Redis server statefulset down to 0 replicas and then back up to 3.
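
A minimal sketch of the scale-down and scale-up follows; the statefulset name {release-name}-ibm-redis-server is an assumption, so check the output of the get command above for the actual name in your deployment:

# Scale the Redis statefulset down to 0 replicas
kubectl scale statefulset {release-name}-ibm-redis-server --replicas=0
# Wait until all Redis pods have terminated, then scale back up to 3
kubectl scale statefulset {release-name}-ibm-redis-server --replicas=3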

Data fetch error
When you fetch seasonality data in the Incident Viewer, an error is displayed.

Problem
The following error occurs when fetching seasonality data in the Incident Viewer.

An error occurred while fetching data from the server. Please make sure you have an active internet connection. Code: FETCH_ERROR_NETWORK

This error happens when there is a cookie conflict with the IBM WebSphere Application Server cookie, LTPAToken2.

Resolution
To work around this issue, clear your browser cookies.

New view not available
When you add a view in the Event Viewer, the view should be available in the drop-down list.

Problem
After you add a new view and click investigate on a parent event in the Event Viewer, the new view is not displayed in the view drop-down list in the Incident Viewer.

Resolution
This is a known issue. To display a new view in the list, the view must contain all of the following fields as display columns, sort columns, or relationships:
• ParentIdentifier
• Identifier
• CEACorrelationDetails
• CEASeasonalDetails

Pod stops and doesn't recover
The noi-proxy pod stops unexpectedly and doesn't recover automatically.

Problem
This problem occurs on a Netcool Operations Insight on Red Hat OpenShift deployment.

Solution
Restart the pod to solve the issue.

Troubleshooting a hybrid deployment
Use this information to support troubleshooting of errors that might occur in a hybrid deployment of cloud native Netcool Operations Insight components on Red Hat OpenShift with an on-premises installation of Netcool Operations Insight. You can also look in the troubleshooting sections for cloud and on-premises systems.

Errors on command line after installation
If you receive an Error response on the command line when you run the kubectl get command following a cloud or hybrid deployment, you must run a series of checks to determine whether there is an error, and where that error is.

Symptom
Following a Cloud or Hybrid deployment, run a command similar to the following:

kubectl get noi my_deployment -o yaml

Where my_deployment is the name of your deployment. If you see the following response fragment on the command line, then you need to investigate further:

status:
  phase: Error

Investigating the problem
Run the following commands to investigate further.

Table 159. Further investigation

Item to check: Ensure all jobs are in the correct state.
Command: oc get jobs --all-namespaces

Item to check: Ensure correct version.
Command: kubectl describe noi om193 | egrep ^build

Item to check: Check the status of important parameters.
Examples: noi, noitransformations, noihybrid

Item to check: Check noi_operator pods.
Command: kubectl logs noi-operator-566b845789-kzfrc

Item to check: If there are noi_operator pod startup issues, get more details.
Commands: kubectl describe pod noi-operator-566b845789-kzfrc | egrep Events -A 100
kubectl get events

Item to check: Extra checks for OLM (Operator Lifecycle Manager) deployments.
Commands: kubectl get catalogsource --all-namespaces
kubectl get subscription --all-namespaces

Error sorting columns in Event Viewer on hybrid deployments
Changing the on-premises ObjectServer in a hybrid installation causes errors because the new ObjectServer does not have the columns and triggers that are required by cloud native analytics.

About this task
When a hybrid installation of on-premises Operations Management and cloud native Netcool Operations Insight components is installed, the cloud native analytics components create new columns and triggers on the on-premises ObjectServer. If you change the on-premises ObjectServer that cloud native analytics points to, or re-create your on-premises ObjectServer, then these columns and triggers are missing. The missing columns are:

ScopeID
AsmStatusId
CEACorrelationKey
CEACorrelationDetails
CEAIsSeasonal
CEASeasonalDetails
CEAAsmStatusDetails
NormalisedAlarmName
NormalisedAlarmGroup
NormalisedAlarmCode
ParentIdentifier
QuietPeriod
CustomText
JournalSent
SiteName
CauseWeight
ImpactWeight
TTNumber

This causes sorting by seasonal or temporal groups in the Event Viewer to fail with the following error: "Event dataset error HEMDP0389E: e is undefined. Recovery will be attempted automatically on the next refresh." To integrate your hybrid deployment with a different on-premises ObjectServer from the one that it was originally configured with, you must update the on-premises ObjectServer's schema with the columns that are required by cloud native analytics.
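
For orientation only, the schema file that you extract in the following procedure contains ObjectServer SQL along these lines; the column name and type below are an assumption for illustration, so always apply the cea_aggregation_schema.sql file from the pod rather than this sketch:

-- Hypothetical fragment of cea_aggregation_schema.sql
alter table alerts.status add column CEACorrelationKey varchar(255);
go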

Procedure
1. Find the pod name for the cloud native analytics action service.

kubectl get pods -l app.kubernetes.io/component=eanoiactionservice

2. Get the ObjectServer schema changes that are required by cloud native analytics from the cloud native analytics action service pod.

kubectl cp actionservice_pod:ea-noiomnibus-config/objectserver/cea_aggregation_schema.sql cea_aggregation_schema.sql

Where actionservice_pod is the name of the cloud native analytics action service pod that was retrieved in step 1.
3. Get the ObjectServer triggers that are required by cloud native analytics from the cloud native analytics action service pod.

kubectl cp actionservice_pod:ea-noiomnibus-config/objectserver/cea_aggregation_triggers.sql cea_aggregation_triggers.sql

Where actionservice_pod is the name of the cloud native analytics action service pod that was retrieved in step 1.
4. Copy the extracted SQL files, cea_aggregation_schema.sql and cea_aggregation_triggers.sql, to the servers that are hosting your primary and backup ObjectServers.
5. Run cea_aggregation_schema.sql on the primary ObjectServer.

$OMNIHOME/bin/nco_sql -U root -P password -S OSname < cea_aggregation_schema.sql

Where:
• password is the password for the root user on the ObjectServer.
• OSname is the name of the ObjectServer.
6. Run cea_aggregation_triggers.sql on the primary ObjectServer.

$OMNIHOME/bin/nco_sql -U root -P password -S OSname < cea_aggregation_triggers.sql

Where:
• password is the password for the root user on the ObjectServer.
• OSname is the name of the ObjectServer.
7. Run cea_aggregation_schema.sql on the backup ObjectServer.

$OMNIHOME/bin/nco_sql -U root -P password -S OSname < cea_aggregation_schema.sql

Where:
• password is the password for the root user on the ObjectServer.
• OSname is the name of the ObjectServer.
8. Run cea_aggregation_triggers.sql on the backup ObjectServer.

$OMNIHOME/bin/nco_sql -U root -P password -S OSname < cea_aggregation_triggers.sql

Where:
• password is the password for the root user on the ObjectServer.
• OSname is the name of the ObjectServer.
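
Optionally, as a quick sanity check (not a documented step), you can confirm that the new columns exist by describing alerts.status on each ObjectServer; the credentials are the same as in the preceding commands:

echo "describe alerts.status;
go" | $OMNIHOME/bin/nco_sql -U root -P password -S OSname | grep -E 'CEA|ScopeID|ParentIdentifier'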

Console integration fails in hybrid deployment
Console integration is failing in a hybrid deployment because of a missing certificate in Dashboard Application Services Hub.

About this task
When your hybrid deployment is complete, under Console Settings > Console Integration in Dashboard Application Services Hub there is an entry called Cloud Analytics. If this console integration is not present, or the connection to it is failing, this might be because the TLS certificate was not imported correctly from the cloud native Netcool Operations Insight components deployment.

Procedure
Re-import the TLS certificate for cloud native analytics on Red Hat OpenShift into your on-premises Dashboard Application Services Hub installation with the following command:

JazzSM_Profile_home/bin/wsadmin.sh -lang jython -username was_admin_user -password was_admin_password -f integ_kit_dir/dash-authz/scripts/importTLSCertificate.py CNEA_cluster_url

Where:
• was_admin_user is the WebSphere administrative user, for example smadmin.
• was_admin_password is the password for the WebSphere administrative user.
• integ_kit_dir is the directory where your Netcool Hybrid Deployment Option Integration Kit is installed.
• CNEA_cluster_url is the master node of the cluster on which Cloud Native Analytics is deployed, for example https://netcool.noi.apps.abc102-ocp42.os.fyre.xyz.com.

Manage Policies screen has broken policy hyperlinks

Problem
The Manage Policies screen has broken hyperlinks for the policies. This is caused by the links changing, and the UI caching the previous hyperlinks.

Resolution
To resolve this, clear the browser cache and refresh the page.

Event Viewer has missing analytics icons

About this task
Analytics icons are missing from the Event Viewer, even though there are active live policies that have triggered. This is caused by problems with the ea-noi-layer-eanoigateway pod on the cloud native Netcool Operations Insight components deployment.

Procedure
1. Find the name of your ea-noi-layer-eanoigateway pod with the following command:

kubectl get pod | grep releasename-ea-noi-layer-eanoigateway

where releasename is the name of the custom resource for your cloud native Netcool Operations Insight components deployment.
2. Restart the ea-noi-layer-eanoigateway pod with the following command:

kubectl delete pod ea-noi-layer-eanoigateway_pod_name

where ea-noi-layer-eanoigateway_pod_name is the pod name that was returned in step 1.

Error when you try to open Incident Viewer (hybrid HA)

Problem
If you deploy the Netcool Hybrid Deployment Option Integration Kit on an HA hybrid deployment but forget to update NetcoolOAuthProvider.xml, the following error message is displayed when you attempt to open the Incident Viewer or other cloud native analytics pages in the on-premises DASH deployment:

{"code":0,"message":"expected 200 OK, got: 401 Unauthorized","level":"fatal"}

Resolution
Ensure that the extra instructions for an HA hybrid installation, Hybrid (High Availability mode), were followed. Step 5 of those instructions describes how to update your NetcoolOAuthProvider.xml file; this step must be completed after each redeployment of the Netcool Hybrid Deployment Option Integration Kit.

Runbook automation and Netcool/Impact integration: ObjectServer maximum row size error
This error might be encountered when running the IBM Tivoli Netcool/Impact V7.1.0.19 rba_objectserver_fields_updateFP19.sql script to update the ObjectServer with the new RunbookIDArray field. The script is run as part of the runbook automation and Netcool/Impact integration steps.

Symptoms
When running the script on the ObjectServer, the following error is displayed:

./nco_sql -username root -password password -server AGG_P < rba_objectserver_fields_updateFP19.sql
ERROR=Row size too large on line 34 of statement '------...', at or near 'status'

The new RunbookIDArray field is not added when this error occurs.

Causes
The ObjectServer is running close to the maximum row size of 64 KB. As a result, it does not allow the new varchar(2048) column to be added, and the preceding error message is generated.

Resolving the problem
Try reducing the varchar to something smaller, for example varchar(1024). Note that this also reduces the amount of data that the column can hold.

Troubleshooting Netcool Operations Insight on premises
Use this information to troubleshoot problems that might occur when you install or use Netcool Operations Insight on premises.

JVM memory usage at capacity
If JVM memory usage is at capacity, then you must increase the maximum heap size.

About this task
If you see an alert similar to

Summary ALERT:JVM memory usage is 768 MB of capacity 768 MB: 100%

then you must increase the WebSphere Application Server Java heap size.

Procedure
1. As the administrative user, log in to Dashboard Application Services Hub. If you use the default root location of /ibm/console, the URL is in the following format: https://<host>:<port>/ibm/console/logon.jsp. For example, https://myserver.domain.com:16311/ibm/console/logon.jsp.
2. From the Console Settings menu, select WebSphere Administration Console.
3. Select Servers > Server Types > WebSphere application servers, and click your server name, for example server1.
4. Select Java and process management > Process definition > Java virtual machine.
5. There are two associated settings that can be changed: Initial Heap Size and Maximum Heap Size. Increase the value for the required setting, depending on your issue.
6. Select Apply > Save.
7. Restart Dashboard Application Services Hub on your Netcool Operations Insight on premises installation with the following commands:

cd JazzSM_Profile/bin
./stopServer.sh server1 -username smadmin -password password
./startServer.sh server1

Troubleshooting Event Analytics (on premises)
Use the following troubleshooting information to resolve problems with Event Analytics. If your problem is not listed in this topic, then refer to the Release notes for additional issues.

Analytics data
Use the following troubleshooting information to resolve problems with Event Analytics analytics data.

Two or more returned seasonal events appear to be identical
It is possible for events to have the same Node, Summary, and Alert Group but a different Identifier. In this scenario, the event details of two (or more) events can appear to be identical because the Identifier is not displayed in the details.

Error displaying Seasonal Event Graphs in Microsoft Internet Explorer browser
The Seasonal Event Graphs do not display in a Microsoft Internet Explorer browser. This problem happens because Microsoft Internet Explorer requires the Microsoft Silverlight plug-in to display the Seasonal Event Graphs. To resolve this problem, install the Microsoft Silverlight plug-in.

Within the Event Viewer, you are unable to view seasonal events and error ATKRST103E is logged
When you complete the following type of steps, the seasonal events are not viewable within the Event Viewer and error ATKRST103E is logged.
1. Open the Event Viewer and select to edit the widget from the widget menu.
2. From the list on the edit screen, select the Impact Cluster data provider.
3. Select to view the seasonality report and the report name.
4. Save the configuration.
To resolve the problem, view seasonal events by using the provided seasonal events pages, and view related events parent-to-child relationships by using the Tivoli Netcool/OMNIbus data provider.

Event relationships display in the Event Viewer only if the parent and child events match the filter
The Event Viewer is only able to show relationships between events if the parent and the child events are all events that match the filter. There are some use cases for related events where parent or child events might not match the filter.
Background
Netcool/OMNIbus Web GUI is able to show the relationships between events in the Event Viewer if the Event Viewer view in use has an associated Web GUI relationship. This relationship defines which field in an event contains the identifier of the event's parent, and which field contains the identifier for the current event. For more information about defining event relationships, see http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/webtop/wip/task/web_cust_jsel_evtrelationshipmanage.html.
The relationship function works from the set of events that are included in the event list, and the event list displays the events that match the relevant Web GUI filter. See the following example. If you have a filter called Critical that shows all critical events (the filter clause is Severity = 5), then relationships between these events are shown provided the parent and child events in the relationships all have Severity = 5. If you have a parent event that matches the filter Severity = 5 but has relationships to child events that have a major severity (Severity = 4), these child relations are not seen in the event list because the child events do not match the filter. Furthermore, these child relations are not included in the set of events that are returned to the Event Viewer by the server.

Resolution
To resolve this problem, you must define your filter with appropriate filter conditions that ensure that related events are included in the data that is returned to the Event Viewer by the server. The following example builds on the example that is used in the Background section.
1. Make a copy of the Critical filter and name the copy CriticalAndRelated. You now have two filters. Use the original filter when you want to see only critical events. Use the new filter to see related events, even if those events are not critical.
2. Manually modify the filter condition of the CriticalAndRelated filter to include the related events. To manually modify this filter condition, use the advanced mode of the Web GUI filter builder.
The following example conditions are based on the current example. The main filter condition is Severity = 5. In an event, the field that denotes the identifier of the parent event is called ParentIdentifier. The value of the ParentIdentifier field, where populated, is the Identifier of an event. If ParentIdentifier is 0, this value is a default value and does not reference another event.
• Including related child events. To include events that are the immediate child events of events that match the main filter, set this filter condition.

Severity = 5 OR ParentIdentifier IN (SELECT Identifier FROM alerts.status WHERE Severity = 5)

• Including related parent events. To include events that are the immediate parent of events that match the main filter, set this filter condition.

Severity = 5 OR Identifier IN (SELECT ParentIdentifier from alerts.status WHERE Severity = 5)

• Including related sibling events. To include events that are the other child events of the immediate parents of the event that matches the main filter (the siblings of the events that match the main filter), set this filter condition.

Severity = 5 OR ParentIdentifier IN (SELECT ParentIdentifier from alerts.status WHERE Severity = 5 AND ParentIdentifier > 0)

• Including related parents, children, and siblings together. Combine the previous types of filter conditions so that the new CriticalAndRelated filter retrieves critical events, and the immediate children of the critical events, and the immediate parents of the critical events, and the immediate children of those parent events (the siblings). Set this filter condition.

Severity = 5
OR ParentIdentifier IN (SELECT Identifier FROM alerts.status WHERE Severity = 5)
OR Identifier IN (SELECT ParentIdentifier from alerts.status WHERE Severity = 5)
OR ParentIdentifier IN (SELECT ParentIdentifier from alerts.status WHERE Severity = 5 AND ParentIdentifier > 0)

• Including related events that are more than one generation away. In the previous examples, the new filter conditions go up to only one level, up or down, from the initial set of critical events. However, you can add more filter conditions to retrieve events that are more than one generation away from the events that match the main filter. If you want to retrieve grandchildren of the critical events (that is, two levels down from the events that match the main filter condition) and immediate children, set this filter condition.

-- The initial set of Critical events
Severity = 5

OR

-- Children of the Critical events
ParentIdentifier IN (SELECT Identifier FROM alerts.status WHERE Severity = 5)

OR

-- Children of the previous "child events"
ParentIdentifier IN (SELECT Identifier FROM alerts.status WHERE ParentIdentifier IN (SELECT Identifier FROM alerts.status WHERE Severity = 5))

Use a similar principle to retrieve parent events that are two levels up, and siblings of the parent events. To pull this scenario together, set this filter condition.

-- The initial set of Critical events
Severity = 5

OR

-- Children of the Critical events
ParentIdentifier IN (SELECT Identifier FROM alerts.status WHERE Severity = 5)

OR

-- Children of the previous "child events"
ParentIdentifier IN (SELECT Identifier FROM alerts.status WHERE ParentIdentifier IN (SELECT Identifier FROM alerts.status WHERE Severity = 5))

OR

-- Parents of the Critical events
Identifier IN (SELECT ParentIdentifier from alerts.status WHERE Severity = 5)

OR

-- Parents of the previous "parent events"
Identifier IN (SELECT ParentIdentifier from alerts.status WHERE Identifier IN (SELECT ParentIdentifier from alerts.status WHERE Severity = 5))

OR

-- Other children of the Critical events' parents
ParentIdentifier IN (SELECT ParentIdentifier from alerts.status WHERE Severity = 5 AND ParentIdentifier > 0)

OR

-- Other children of the Critical events' grandparents
ParentIdentifier IN (SELECT ParentIdentifier from alerts.status WHERE Identifier IN (SELECT ParentIdentifier from alerts.status WHERE Severity = 5 AND ParentIdentifier > 0) AND ParentIdentifier > 0)

You can continue this principle to go beyond two levels in the hierarchy. However, with each additional clause, the performance of the query degrades due to the embedded subquerying. Therefore, there might be a practical limit to how far away the related events can be.

Troubleshooting Event Analytics configuration
Use the following troubleshooting information to resolve problems with Event Analytics configurations.

Fields missing from the Historical Event Database
This situation typically occurs when you switch from one Historical Event Database to another, with different fields. Following this change, fields that were in use by Event Analytics, whether standard fields such as Node, Summary, or Acknowledged, or aggregate fields, are no longer present in the new Historical Event Database. In this case, when you reach the Configure Analytics > Report fields screen, you encounter a blank screen. To resolve this problem, perform the following steps:
1. Complete one of the following options:
• Add the missing field to the Historical Event Database.
• Use a database view instead of the Historical Event Database table to add a dummy field for the missing aggregate field. For more information about creating a database view for the Event Analytics wizard, see “Mapping customized field names” on page 294.

2. Open the Event Analytics wizard and, depending on the type of field you added, perform one of the following actions:
• If you added a standard field, then no action is required.
• If you added an aggregate field, and this field is no longer needed, then you can delete it.
Now save the configuration.

Seasonal Event configuration stops running before completion
The Seasonal Event configuration does not complete running. No errors are displayed. The report progress does not increase. This problem occurs if a Seasonal Event Report is running when the Netcool/Impact back-end server goes offline while the Impact UI server is still available. No errors are displayed in the Impact UI and no data is displayed in the widgets/dashboards. To resolve this problem, ensure that the Netcool/Impact servers are running. Edit and rerun the Seasonal Event Report.

Incomplete, stopped, and uninitiated configurations
Configuration run operations do not complete, are stalled on the Configure Analytics window, or fail to start. These problems occur if the services are not started after Event Analytics is installed, or the Netcool/Impact server is restarted. To resolve these problems, complete the following steps.
1. In the Netcool/Impact UI, select the Impact Services tab.
2. Ensure that each of the following services is started. To start a service, right-click the service and select Start.
LoadRelatedEventPatterns
ProcessClosedPatternInstances
ProcessPatternGroupsAllocation
ProcessRelatedEventConfig
ProcessRelatedEventPatterns
ProcessRelatedEventTypes
ProcessRelatedEvents
ProcessSeasonalityAfterAction
ProcessSeasonalityConfig
ProcessSeasonalityEvents
ProcessSeasonalityNonOccurrence
UpdateSeasonalityExpiredRules

Event Analytics: Configurations fail to run due to event count queries that take too long
Configurations fail to run due to large or unoptimized datasets that cause the Netcool/Impact server to time out, so reports fail to complete. To resolve this issue, increase the Netcool/Impact server timeout value to ensure that the Netcool/Impact server processes these events before it times out. As a result of increasing this server timeout value, the Netcool/Impact server waits for the events to be counted, thus ensuring that the reports complete and display in the appropriate portlet. Edit the Netcool/Impact impact.server.timeout value at:

$IMPACT_HOME/etc/ServerName_server.props

By default, the impact.server.timeout property is set to 120000 milliseconds, which is equal to 2 minutes. The recommendation is to specify a server timeout value of at least 5 minutes. If the issue continues, increase the server timeout value until the reports successfully complete and display in the appropriate portlet.
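
For illustration, setting a 5-minute timeout might look like the following in the server properties file; the server name NCI is an assumption, so edit your own server's properties file:

# $IMPACT_HOME/etc/NCI_server.props
# 300000 milliseconds = 5 minutes
impact.server.timeout=300000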

Running a Seasonal Event configuration displays an error message: Error creating report. Seasonality configuration is invalid
The Seasonal Event configuration does not run. An error message is displayed: Error creating report. Seasonality configuration is invalid. Verify settings and retry. This problem occurs when Event Analytics is not correctly configured before you run a Seasonal Event Report. To resolve this problem, review the Event Analytics installation and configuration guides to ensure that all of the prerequisites and configuration steps are complete. Also, if you use a table name that is not the standard REPORTER_STATUS, you must verify the settings that are documented in the following configuration topics:
• “Configuring Db2 database connection within Netcool/Impact” on page 273
• “Configuring Oracle database connection within Netcool/Impact” on page 275
• “Configuring MS SQL database connection within Netcool/Impact” on page 277

Seasonality and Related Event configuration runs time out when you use large data sets
Before the seasonality policy starts to process a report, the seasonality policy issues a database query to find out how many rows of data need to be processed. This database query times out when the database contains many rows and the database is not tuned to process the query. Within the /logs/impact_server.log file, the following message is displayed.

02 Sep 2014 13:00:28,485 ERROR [JDBCVirtualConnectionWithFailOver] JDBC Connection Pool recieved error trying to connect to data source at: jdbc:db2://localhost:50000/database
02 Sep 2014 13:02:28,500 ERROR [JDBCVirtualStatement] JDBC execute failed twice. com.micromuse.common.util.NetcoolTimeoutException: TransBlock [Executing SQL query: select count(*) as COUNT from Db2INST1.PRU_REPORTER where ((Severity >= 4) AND ( FIRSTOCCURRENCE > '2007-09-02 00:00:00.000' )) AND ( FIRSTOCCURRENCE < '2014-09-02 00:00:00.000')] timed out after 120000ms.

Check that you have indexes for the FIRSTOCCURRENCE field and any additional filter fields that you specified, for example, Severity. Use a database tuning utility, refresh the database statistics, or contact your database administrator for help. Increase the impact.server.timeout property to a value greater than the default of 120 seconds; see https://www.ibm.com/support/pages/node/726337/.
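
As a sketch only, an index covering the columns from the query in the preceding log might be created as follows in Db2; the index name is hypothetical, and your database administrator should adapt the column list to your actual filter fields:

-- Hypothetical index on the history table from the log above
create index seasonalityFilterIdx on Db2INST1.PRU_REPORTER (FIRSTOCCURRENCE, SEVERITY)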

Seasonal or related events configurations hang, with error ATKRST132E
When you start cluster members, replication starts and the Netcool/Impact database goes down. Any running seasonality reports or related events configurations hang, and this error message is logged in the Netcool/Impact server log.

ATKRST132E An error occurred while transferring a request to the following remote provider: 'Impact_NCICLUSTER.server.company.com'. Error Message is 'Cannot access data provider - Impact_NCICLUSTER.server.company.com'.

To resolve this problem, do a manual restart or a scheduled restart of the affected reports or configurations.

Event Analytics configuration Finished with Warnings
The seasonality report or related events configuration completes with a status of Finished with Warnings. This message indicates that a potential problem was detected, but it is not of a critical nature. Review the log file ($NCHOME/logs/impactserver.log) for more information. The following is an example of a warning found in impactserver.log:

11:12:38,366 WARN [NOIProcessRelatedEvents] WARNING: suggested pattern : RE-sqa122-last36months-Sev3-Default_Suggestion4 includes too many types, could be due to configuration of types/patterns. The size of the data execeeded the column limit. The pattern will be dropped as invalid.

Event Analytics configuration Finished with Errors
One reason for an Event Analytics configuration to complete with a status of Finished with Errors is that the suggested patterns numbering is not sequential. This can happen because, for example, the pattern type found is invalid or the string is too long to be managed by the Derby database. Review the log file ($NCHOME/logs/impactserver.log) for more information.

Export
Use the following troubleshooting information to resolve problems with Event Analytics export operations.

Export of Event Analytics reports causes log out of DASH
If Netcool/Impact and DASH are installed on the same server, a user might be logged out of DASH when exporting Event Analytics reports from DASH. The problem occurs when the Download export result link is clicked in DASH. A new browser tab is opened and the DASH user is logged out from DASH. To avoid this issue, configure SSO between DASH and Netcool/Impact. For more information, see https://www.ibm.com/support/knowledgecenter/SSSHYH_7.1.0.19/com.ibm.netcoolimpact.doc/admin/imag_configure_single_signon.html.

Failover
Use the following troubleshooting information to resolve failover problems with Event Analytics.

Configuring Netcool/Impact for ObjectServer failover
Netcool/Impact does not process new events for Event Analytics after ObjectServer failover. Seasonal event rule actions are not applied as new events are processed if the Netcool/Impact server is not configured correctly for ObjectServer failover. For example, if a seasonal event rule creates a synthetic event, the synthetic event does not appear in the event list; or if a seasonal event rule changes the column value for an event, the value is unchanged. This problem occurs when Netcool/Impact is incorrectly configured for ObjectServer failover. To resolve this problem, extra Netcool/Impact configuration is required for ObjectServer failover. To correctly configure Netcool/Impact, complete the steps in the Managing the OMNIbusEventReader with an ObjectServer pair for New Events or Inserts topic in the Netcool/Impact documentation: https://www.ibm.com/support/knowledgecenter/SSSHYH_7.1.0.19/com.ibm.netcoolimpact.doc/common/dita/ts_serial_value_omnibus_eventreader_failover_failback.html
When configured, Netcool/Impact uses the failover ObjectServer to process the event.

Patterns
Use the following troubleshooting information to resolve problems with Event Analytics patterns.

The pattern displays 0 groups and 0 events
The events pattern that is created and displayed in the Group Sources table in the View Related Events portlet displays 0 groups and 0 events.

The pattern displays 0 groups and 0 events for one of the following reasons:
• The pattern creation process is not finished. The pattern creation process can take a long time to complete due to large datasets and high numbers of suggested patterns.
• The pattern creation process was stopped before it completed.
To confirm the reason that the pattern displays 0 groups and 0 events, complete the following steps.
1. To confirm that the process is running:
a. Append the policy name to the policy logger file from the Services tab, Policy Logger service. For more information about configuring the Policy logger, see https://www.ibm.com/support/knowledgecenter/SSSHYH_7.1.0.19/com.ibm.netcoolimpact.doc/user/policy_logger_service_window.html.
b. Check the following log file.

$IMPACT_HOME/logs/_policylogger_PG_ALLOCATE_PATTERNS_GROUPS.log

If the log file shows that the process is running, wait for the process to complete. If the log file shows that the process stopped without completing, proceed to step 2.
2. To force reallocation for all configurations and patterns, run the PG_ALLOCATE_PATTERNS_GROUPS_FORCE policy from the Global projects list, with no parameters, from the UI.
3. Monitor the $IMPACT_HOME/logs/_policylogger_PG_ALLOCATE_PATTERNS_GROUPS_FORCE.log log file to track the completion of the process.

Event pattern with the same criteria already exists (error message)
An error message is displayed if you create a pattern that has a duplicate pattern criterion selected. Check the following log file to determine which pattern is the duplicate:

$IMPACT_HOME/logs/_policylogger_PG_SAVEPATTERN.log

Performance
Use the following troubleshooting information to resolve performance problems with Event Analytics.

Improving Event Analytics performance due to large search results
If you are performing an upgrade of Event Analytics from an earlier version, the upgrade repopulates the existing data from the previous version and aligns this data with the new schema, tables, and views. It is possible that you might see degradation in the performance of Event Analytics operations. Examples of degradation in performance include, but are not limited to:
• Reports can hang.
• Reports complete, but no data is displayed for seasonal events.
To improve any degradation in the performance of Event Analytics operations due to the upgrade to 1.3.1 or later releases, run the SE_CLEANUPDATA policy as follows:
1. Log in to the server where IBM Tivoli Netcool/Impact is stored and running. You must log in as the administrator (that is, you must be assigned the ncw_analytics_admin role).
2. Navigate to the policies tab and search for the SE_CLEANUPDATA policy.
3. Open this policy by double-clicking it.
4. Run the policy by using the run button on the policy screen toolbar.
The SE_CLEANUPDATA policy cleans up the data. Specifically, the SE_CLEANUPDATA policy:
• Does not remove or delete any data from the results tables. The results tables hold all the original information about the analysis.

• Provides some additional views and tables on top of the original tables to enhance performance.
• Combines some information from related events, seasonal events, rules, and statistics.
• Cleans up only the additional tables and views.

Related Event Details page is slow to load
To avoid this problem, create an index on the Event History Database for the SERVERSERIAL and SERVERNAME columns.

create index myServerIndex on Db2INST1.REPORTER_STATUS (SERVERSERIAL, SERVERNAME)

It is the responsibility of the database administrator to construct (and maintain) appropriate indexes on the REPORTER history database. The database administrator should review the filter fields for the reports as a basis for an index, and should also review if an index is required for Identity fields.

Export of large Related Event configuration fails
The export of a configuration with more than 2000 Related Event groups fails, and an error message is displayed: Export failed. An invalid response was received from the server. To resolve this issue, increase the Java Virtual Machine (JVM) memory heap size settings from the default values. For Netcool/Impact, the default value of Xmx is 2400 MB. In the JVM, Xmx sets the maximum memory heap size. To improve performance, make the heap size larger than the default setting of 2400 MB. For details about increasing the JVM memory heap size, see https://www.ibm.com/support/knowledgecenter/SSSHYH_7.1.0.19/com.ibm.netcoolimpact.doc/admin/imag_monitor_java_memory_status_c.html.

Troubleshooting Event Search
Use this information to support troubleshooting of errors that might occur when deploying or using Event Search.

Troubleshooting event search
How to resolve problems with your event search configuration.
• “You must log in each time you switch between interfaces” on page 678
• “Operations Analytics - Log Analysis session times out after 2 hours” on page 679
• “Launch to Operations Analytics - Log Analysis fails on Firefox in non-English locales” on page 679
• “Right-click tools fail to start Operations Analytics - Log Analysis from event lists” on page 679
• “Error message displayed when dynamic dashboard is run” on page 680
• “Error message displayed on Show event dashboard by node tool from event lists” on page 680
• “Chart display in Operations Analytics - Log Analysis changes without warning” on page 680
• “addIndex script error: Unexpected ant version” on page 680
• “addIndex script error: Duplicate fields in template file” on page 681

You must log in each time you switch between interfaces
The problem occurs if single sign-on (SSO) is not configured. If the Web GUI and Operations Analytics - Log Analysis are on the same host computer, you must log in each time you switch between the interfaces in your browser. This problem happens because each instance of WebSphere Application Server uses the same default name for the LTPA token cookie: LtpaToken2. When you switch between the interfaces, one WebSphere Application Server instance overwrites the cookie of the other and your initial session is ended. The ways of resolving this problem are as follows:

• Customize the domain name in the Web GUI SSO configuration:
1. In the administrative console of the WebSphere Application Server that hosts the Web GUI, click Security > Global security. Then, click Authentication > Web security and click Single sign-on (SSO).
2. Enter an appropriate domain name for your organization, for example, abc.com. By default, the domain name field is empty and the cookie's domain is the host name. If you also customize the domain name in the Operations Analytics - Log Analysis WebSphere Application Server, ensure that the two domain names are different to avoid any conflict.
3. Restart the Dashboard Application Services Hub server.
• Use the fully qualified domain name for accessing one instance of WebSphere Application Server and the IP address for accessing the other. For example, always access the Web GUI by the fully qualified domain name and always access Operations Analytics - Log Analysis by the IP address. To configure the Web GUI to access Operations Analytics - Log Analysis by the IP address:
1. In the $WEBGUI_HOME/etc/server.init file, change the value of the scala.url property to the IP address of the host. For example:

https://3.127.46.125:9987/Unity

2. Restart the Dashboard Application Services Hub server. See http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/webtop/wip/task/web_adm_server_restart.html.

Operations Analytics - Log Analysis session times out after 2 hours
This problem occurs if SSO is not configured. The first time that you start the Operations Analytics - Log Analysis product from an event list in the Web GUI, you are prompted to log in to Operations Analytics - Log Analysis. You are automatically logged out after 2 hours and must reenter your login credentials every 2 hours. This problem occurs because the default expiration time of the LTPA token is 2 hours. To resolve this problem, change the session timeout in the Operations Analytics - Log Analysis product as follows:
1. In the $SCALA_HOME/wlp/usr/servers/Unity/server.xml file, increase the value of the session timeout attribute to the required value, in minutes. For example, to change the session timeout to 540 minutes:
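
A minimal sketch of the server.xml change, assuming the timeout is controlled by the Liberty httpSession element's invalidationTimeout attribute (the element and attribute name are assumptions; verify them against your Liberty configuration):

<!-- invalidationTimeout is assumed here; "540m" sets the session timeout to 540 minutes -->
<httpSession invalidationTimeout="540m"/>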

2. Restart the Operations Analytics - Log Analysis WebSphere Liberty Profile.

Launch to Operations Analytics - Log Analysis fails on Firefox in non-English locales
This problem is a known issue when you launch from the Active Event List (AEL) into the Firefox browser. If your browser is set to a language other than US English (en_us) or English (en), you might not be able to launch into Operations Analytics - Log Analysis from the Web GUI AEL. This problem happens because Operations Analytics - Log Analysis does not support all the languages that are supported by Firefox. To work around this problem, try setting your browser language to an alternative language version. For example, if the problem arises when the browser language is French[fr], set the language to French[fr-fr]. If the problem arises when the browser language is German[de-de], set the language to German[de].

Right-click tools fail to start Operations Analytics - Log Analysis from event lists
The following error is displayed when you start the tools from the right-click menu of an event list: CTGA0026E: The APP name in the query is invalid or it does not exist

This error occurs because the custom app that is defined in the $WEBGUI_HOME/etc/server.init file does not match the file names in the Tivoli Netcool/OMNIbus Insight Pack. To resolve this problem, set the scala.app.keyword and scala.app.static.dashboard properties in the server.init file accordingly.
• If the properties are set as follows, the version of the Insight Pack needs to be V1.1.0.2:

scala.app.keyword=OMNIbus_Keyword_Search scala.app.static.dashboard=OMNIbus_Static_Dashboard

• If the properties are set as follows, the version of the Insight Pack needs to be V1.1.0.1 or V1.1.0.0:

scala.app.keyword=OMNIbus_SetSearchFilter
scala.app.static.dashboard=OMNIbus_Event_Distribution

If you need to change the values of these properties, restart the Dashboard Application Services Hub server afterwards. See http://www-01.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/webtop/wip/task/web_adm_server_restart.html.

Error message displayed when dynamic dashboard is run
The following error is displayed when you run a dynamic dashboard from the Operations Analytics - Log Analysis product: undefined not found in results data. This error is a known defect in the Operations Analytics - Log Analysis product. To resolve it, close and then reopen the dynamic dashboard.

Error message displayed on "Show event dashboard by node" tool from event lists
An error message is displayed when you start the Show event dashboard by node tool from an event list. This error is caused by incompatibility between the version of the Insight Pack and the version of the Operations Analytics - Log Analysis product. Ensure that the versions are compatible. For more information about checking which version of the Insight Pack is installed, see “Checking the version of the Insight Pack” on page 317.

Chart display in Operations Analytics - Log Analysis changes without warning
The sequence in which charts are displayed on the Operations Analytics - Log Analysis GUI can change intermittently. This problem is a known defect in the Operations Analytics - Log Analysis product and has no workaround or solution.

addIndex script error: Unexpected ant version
When you are running the addIndex script to create or update a data source type, the script fails with an error that contains text similar to the following: installation does not contain expected ant version.

./addIndex.sh -i
Prompt: installation does not contain expected ant version

This error could be caused by either of the following issues:
• Operations Analytics - Log Analysis is not installed on the machine on which you are running the addIndex script.
• Operations Analytics - Log Analysis is installed on the machine on which you are running the addIndex script, but the script does not support the version of Operations Analytics - Log Analysis.
To resolve this, run the addIndex script on a machine where Operations Analytics - Log Analysis is installed. Ensure that Operations Analytics - Log Analysis V1.3.5 is installed.

addIndex script error: Duplicate fields in template file
When you are running the addIndex script to create or update a data source type, the script fails with an error that contains text similar to the following: filesets.json does not exist.

./addIndex.sh -i
....
generatebasedsv:
[exec] Duplicate field name: field_name
....
BUILD FAILED
filepath/addIndex.xml:68: Replace: source file filepath/data_source_type_nameInsightPack_v1.3.1.0/metadata/filesets.json does not exist

Where:
• field_name is the name of the duplicate field.
• filepath is the system-dependent path to the script.
• data_source_type_name is the name of your custom data source type.
The omnibus1100_template.properties template file is used to define the fields in the data source type. This error is caused by the definition of duplicate fields in the omnibus1100_template.properties file. To resolve this, edit the omnibus1100_template.properties file and remove any duplicate fields. Then rerun the addIndex script.

Chapter 10. Reference

Reference information for Netcool Operations Insight.

Accessibility features for Netcool Operations Insight
Accessibility features help users who have a disability, such as restricted mobility or limited vision, to use information technology products successfully. The following Netcool Operations Insight components have accessibility features:
• Jazz for Service Management
• Netcool/OMNIbus Web GUI
• Netcool/Impact
• IBM Tivoli Network Manager
See the relevant product Knowledge Centers for more detailed information on accessibility for each component.

Related information
Jazz for Service Management Knowledge Center: Click here and search for "accessibility" to retrieve more details on accessibility in Jazz for Service Management.
Netcool/OMNIbus Web GUI Knowledge Center (accessibility): Click here for more details on accessibility in Netcool/OMNIbus Web GUI.
Network Manager Knowledge Center (accessibility): Click here for more details on accessibility in Network Manager.
Netcool/Impact Knowledge Center: Click here and search for "accessibility" to retrieve more details on accessibility in Netcool/Impact.

API reference
IBM Netcool Operations Insight on Red Hat OpenShift supports integration with other software and tools with APIs. The following APIs are available:
• Events (IBM Tivoli Netcool/OMNIbus APIs): https://www.ibm.com/support/knowledgecenter/en/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/api/reference/omn_api_http_httpinterface.html
• Topology: https://www.ibm.com/support/knowledgecenter/SS9LQB_1.1.8/Reference/r_asm_ckbk_restapi.html and https://www.ibm.com/support/knowledgecenter/SS9LQB_1.1.8/Reference/r_asm_swaggerdefaults.html
• Runbooks: “Runbook Automation API” on page 683
For more information, see Using the REST API.

Runbook Automation API
IBM Runbook Automation supports integration with other software and tools via the external Runbook Automation HTTP REST interface. This section contains information for the Runbook Automation component that is embedded in the IBM Netcool Operations Insight on Red Hat OpenShift installation. For more information about Runbook Automation as part of cloud event management on a public cloud, or a Runbook Automation Private Deployment, see the Runbook Automation Knowledge Center: https://www.ibm.com/support/knowledgecenter/SSZQDR/com.ibm.rba.doc/API_rba.html

The full interface with the complete list of available API endpoints can be accessed at: https://rba.us-south.cloudeventmanagement.cloud.ibm.com/apidocs-v1. The interface is documented with Swagger (http://swagger.io) and the Swagger UI (https://swagger.io/swagger-ui/).

Working with the Swagger UI
The Swagger UI allows you to try out the API calls immediately once a user is logged in and authorized. Complete the following steps to authorize at the Swagger UI:
1. Create an API key as described in “API Keys” on page 688.
2. Make a note of the generated API key name and password.
3. From the Swagger header section, click Authorize, enter the API key name and API key password in the login dialog, and then click Authorize.
Using the Swagger UI for API calls does not work for an RBA Private Deployment because the Swagger UI connects to an internet address, not your local environment. For a Private Deployment, use an HTTP REST client or a tool such as cURL to test your calls.

Working with cURL
If you do not want to use the Swagger UI, or want to script from a shell script, cURL is a good (but not the only) option to access the Runbook Automation API. The following items need to be considered when using cURL:
• Runbook Automation does not support charsets other than UTF-8. If a header is specified defining another charset, the call will fail. This is an example of a failing cURL command:

curl (…) --header 'Content-Type: application/json; charset=usascii'

• Runbook Automation does not support the HTTP header "Expect: 100-continue". If the input you provide is larger than 1024 Bytes, some versions of cURL send this header by default and you must turn it off. To do this, add the following header to your cURL command: --header 'Expect:'.

Obtaining the base URL of the RBA API
RBA as part of Netcool Operations Insight on OpenShift: the base URL is https://<cluster_URL>/api/v1/rba, where <cluster_URL> is the URL of your Netcool Operations Insight on OpenShift deployment. Note: In Cloud Event Management deployments, use the API keys for Cloud Event Management to access the RBA API.
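
As a sketch, assuming that the API key name and password are supplied as HTTP basic authentication credentials (see “API Keys” on page 688), a list of runbooks could be retrieved as follows; the angle-bracket placeholders are assumptions to replace with your own values:

# Hypothetical example; replace the placeholders with your API key and cluster URL
curl -u <api_key_name>:<api_key_password> \
  --header 'Accept: application/json' \
  'https://<cluster_URL>/api/v1/rba/runbooks'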

Creating or modifying a runbook with the API
This section contains instructions for creating and modifying runbooks with special runbook elements, such as parameters, automations, commands, and goto elements.

About this task
You must send a JSON document to the API endpoint using the POST API: POST /api/v1/rba/runbooks. The document contains the definition of the runbook, as per the input model defined in the Swagger API documentation. For more information, see “Runbook Automation API” on page 683. Use the following points to create the runbook elements within the description of a step.

Procedure
• Create a parameter

To create a parameter, include it in the parameters section of your JSON input. Use the following syntax to use it inside the runbook description:

$PARAMETERNAME

Where $PARAMETERNAME is the name of the parameter chosen in the parameters section of the input JSON. The following JSON shows an example of a runbook defining one step with a mandatory parameter:

{ "name": "An example runbook", "steps": [ { "number": 1, "description": "Solve the task to solve the problem in data center dataCentre." } ], "parameters": [ { "name": "dataCentre", "description": "The data centre in which the operation needs to be done" } ] }

• Insert an automation
To insert an automation, define the parameter mapping in the automationMappings section of your JSON input. Use the following syntax to apply it inside the runbook description:

$AUTOMATIONNAME

Where $AUTOMATIONID is the ID of the Automation Definition, $MAPPINGID is the number of the mapping defined in the automationMappings section of the JSON input, and $AUTOMATIONNAME is the name of the automation. The following JSON shows an example of a runbook defining one step containing an automation and a matching automation mapping. The automation mapping object maps the runbook parameter hostname to the automation parameter TARGET and the runbook parameter serviceName to the automation parameter SERVICE:

{ "name": "An example runbook with an automation", "steps": [ { "number": 1, "description": "To restart the service, execute the following automation: Restart Service." } ], "automationMappings" : [ { "mappingId" : "1", "parameterMappings" : [ { "automationParameterName" : "TARGET", "parameterMappingType" : "parameter", "parameterValue" : "hostname" },

{ "automationParameterName" : "SERVICE", "parameterMappingType" : "parameter", "parameterValue" : "serviceName" } ] }

] }

• Insert a command
Use the following syntax to mark a section of your step description as a command:

$COMMAND

Where $COMMAND is the text you want to be displayed and executed as a command. The following JSON shows an example of a runbook defining one step with a command:

{ "name": "An example runbook with a command", "steps": [ { "number": 1, "description": "Log in to the system and display the running processes with the following command: ps -ef." } ] }

• Insert a GoTo element
Use the following syntax to insert a GoTo element into a runbook:

$TEXT

Where $NUMBER is the step number that the GoTo element directs the user to (use -1 to go to the end of the runbook), and $TEXT is the text that appears inside the element. The following JSON shows an example of a runbook defining multiple steps, with a GoTo to step 3 within the first step:

{ "name": "An example runbook with a GoTo", "steps": [ { "number": 1, "description": "If you are running on a system with SELinux disabled, directly Step 3." }, { "number": 2, "description": "Disable SELinux." }, { "number": 3, "description": "Issue a command that should prevented by SELinux and make sure it executed okay." } ] }

• Insert a collapsible section Use the following syntax to insert a collapsible section with a title into a runbook:

$TITLE

\n\n
\n

$COLLAPSIBLECONTENT

\n

Where $TITLE is the text to always be displayed and $COLLAPSIBLECONTENT is the content only visible when expanded. The following JSON shows an example of a runbook with a collapsible section in step 1. The title contains the text Show Details and the text contains additional information:

{ "name": "An example runbook with a collapsible section", "steps" : [ { "number" : 1, "description" : "

The system name is server01.mydomain.com

Show details:

\n\n
\n

The system is has a 4 core CPU and 8 GiB RAM.

\n
" } ] }

• Changing a runbook
To change an existing runbook instead of creating a new one, use the API endpoint PATCH /api/v1/rba/runbooks/$RUNBOOKID, where $RUNBOOKID is the runbook ID of the runbook that you want to change. The JSON sent will update the sections present in the JSON and leave the sections not present untouched. The following JSON is sent to /api/v1/rba/runbooks/abcdefabcdefabcdefabcdefabcdefab:

{ "description": "This runbook has been updated with a new description: Use this runbook to resolve issues around the error code \"ARBR108E\"." }

This action will only change the description of the runbook and leave all other properties, for example name, steps, parameters, and tags unchanged.
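
For illustration, that PATCH request might be issued with cURL as follows; the authentication placeholders are assumptions:

# Hypothetical example; replace the placeholders with your API key and cluster URL
curl -X PATCH --header 'Content-Type: application/json' \
  -u <api_key_name>:<api_key_password> \
  -d '{"description": "This runbook has been updated with a new description: Use this runbook to resolve issues around the error code \"ARBR108E\"."}' \
  'https://<cluster_URL>/api/v1/rba/runbooks/abcdefabcdefabcdefabcdefabcdefab'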

API usage scenario
The following example demonstrates how the Runbook Automation API can be applied in a more complex workflow scenario. In this scenario, a user wants to integrate an external program with IBM Runbook Automation. The external program should scan the runbook repository and add a new step to all runbooks that contain a tag with the content addstep. Additionally, the external program should only consider runbooks that have been published.

1. Get a list of all available runbooks
The application calls the API endpoint GET /api/v1/rba/runbooks to retrieve a list of all runbooks. Looking at the complete response model of the runbook, it becomes clear that the program must evaluate only the fields runbookId, steps, and tags. The steps and tags fields are at the root level of the document, but runbookId is part of the readOnly section. As only published versions are relevant, the query parameter "version" is set to "approved". The complete call, as provided by the Swagger UI, is:

curl -X GET --header 'Accept: application/json' 'https://rba.mybluemix.net/api/v1/rba/runbooks?version=approved&fields=readOnly%2Csteps%2Ctags'

The response is a JSON array with each entry containing the properties readOnly, steps and tags.

2. Output processing by the external program
The user writes logic in the external program to filter the output. All entries of the JSON array where the tag array does not contain an entry with addstep are discarded. For the remaining entries, the runbookIds are stored together with the steps in a data structure, as per the following JSON format:

[ { "runbookId" : "$runbookId", "steps" : [ … ] }, (…) ]

Within the steps section, the external program changes the content by adding a new step at the end that contains the static content required by the user. For example, an existing runbook with the following steps:

steps : [ { "number" : 1, "description" : "Log in to the system and install the latest updates provided by the OS vendor." } ]

Would be extended to:

"steps": [
  {
    "number": 1,
    "description": "Log in to the system and install the latest updates provided by the OS vendor."
  },
  {
    "number": 2,
    "description": "To conclude this runbook, reboot the system."
  }
]

3. Change the runbooks
The user creates a for-each loop in the external program that iterates over the data structure created in step 2. For each entry, the external program calls the API endpoint PATCH /api/v1/rba/runbooks/:runbookId (where :runbookId is the ID of the runbook from the data structure shown above). The steps section of the prepared data is sent in the body. An example call for a runbook with the ID 7ef63332-5a3e-40f7-a923-af3ffb6795d3 and a steps section as described above would look as follows:

curl -X PATCH --header 'Content-Type: application/json' --header 'Accept: application/json' -d '{
  "steps": [
    {
      "number": 1,
      "description": "Log in to the system and install the latest updates provided by the OS vendor."
    },
    {
      "number": 2,
      "description": "To conclude this runbook, reboot the system."
    }
  ]
}' 'https://rba.mybluemix.net/api/v1/rba/runbooks/7ef63332-5a3e-40f7-a923-af3ffb6795d3'

Summary
By using the Runbook Automation API in an external program, you can automatically and systematically change a large number of runbooks based on the data already present in the system.

API Keys
Use API keys to import or export runbooks, or to perform other actions by using the HTTP API of Runbook Automation. API keys are used to authenticate your HTTP API requests. Each API key is associated with a particular user: the owner of an API key is the user who created it. Requests that are authenticated with an API key are authorized with the same permissions as the owner of that API key.

Create an API key
Create an API key to authenticate to the Runbook Automation API:
1. Log in to Netcool Operations Insight using the noi_lead role. The noi_lead role provides the access rights to create API keys.
2. Click Administration > Integration with other systems > API Keys.
3. Click Generate API key.
4. Enter a Description. You can create multiple API keys. The description helps you to distinguish between them.
5. Ensure that the Runbooks API checkbox is selected.
6. Click Generate. The API key name and API key password are generated and displayed on screen.
7. Copy the API key name and password and save them for your reference. For security reasons, the API key password is not displayed again in API Keys. If you lose your API key password, you must create a new API key. Delete any unused API keys.

Delete an API Key

On the API Keys page, click Delete to delete the API key. You can create and delete API keys with the noi_lead role. For more information about roles, see “Administering users” on page 353.
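As an illustration of using an API key, requests can typically be authenticated by supplying the API key name and API key password as HTTP basic authentication credentials. The following sketch assumes this scheme and reuses the example host from the previous section; replace the credential placeholders with the values generated for your key:

curl -u 'api-key-name:api-key-password' --header 'Accept: application/json' 'https://rba.mybluemix.net/api/v1/rba/runbooks'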

Netcool Operations Insight audit log files
Records of user activity and report history are contained in the Netcool Operations Insight audit log files. The log files can be found at the following locations:

Audit log
$IMPACT_HOME/logs/NCI_NOI_Audit.log

The audit log is a record of all user interactions with Netcool Operations Insight.

Report history log
$IMPACT_HOME/logs/NCI_NOI_Report_History.log

The report history log records the following data for executed Event Analytics reports:
• Run ID
• Date
• Report Name
• Report Type
• Status
• Start Date
• End Date
• Duration
• Number of Events
• Seasonal Events
• Seasonality Related Events Count
• Related Events
• Related Events Groups
• Related Events Group Size
• Suggested Patterns
• Filter
• Additional Related Events Filter

Sample report history log:

"RunId_1519402261","2018-02-23 16:37:43.000","Seasonal_and_Related_events","Combined","COMPLETED","2017-07-01 00:00:00.0","2017-07-04 23:59:59.0","00:08:15","459418","3141","193731","244","3569","93","96","38","10","7","0","", "((AlertGroup != 'Synthetic Event - Parent') OR AlertGroup IS NULL)" "RunId_1519405376","2018-02-23 17:30:53.000","Related_Events_1","RelatedEvents","COMPLETED","2017-07-01 00:00:00.0","2017-07-04 23:59:59.0","","459418","0","0","244","3569","93","96","38","10","7","0","","((AlertGroup ! = 'Synthetic Event - Parent') OR AlertGroup IS NULL)" "RunId_1523626088","2018-04-13 09:30:56.000","Seasonallity_only","Seasonality","COMPLETED","2015-02-01 00:00:00.0","2015-03-01 23:59:59.0","00:02:09","116961","3540","0","0","0","0","0","0","0","0","0","",""

Config map reference
This section lists the pods that have configmaps and explains which parameters you can configure in each configmap.

Primary Netcool/OMNIbus ObjectServer configmap
This topic explains the structure of the configmap for the primary IBM Tivoli Netcool/OMNIbus ObjectServer pod, ncoprimary, and lists the data elements that can be configured with this configmap. Edit this configmap to customize, and add custom automations and triggers to, the primary Netcool/OMNIbus ObjectServer.

Contents
The following table lists the data elements that are contained in the primary Netcool/OMNIbus ObjectServer configmap:

Table 160. Data elements in the primary Netcool/OMNIbus ObjectServer configmap

agg-p-props-append
Description: Properties that are specified in this data element are appended to the end of the Netcool/OMNIbus ObjectServer properties file on pod restart.
More information: Netcool/OMNIbus V8.1 documentation: Using the ObjectServer properties and command-line options

agg-p-sql-extensions
Description: Use this element to add a new SQL extension, such as a trigger or an automation, to the Netcool/OMNIbus ObjectServer on pod restart.
More information: Netcool/OMNIbus V8.1 documentation: ObjectServer SQL

Examples of each of the data elements in this configmap are provided.

Data element: agg-p-props-append
The following data element appends a MessageLevel: 'debug' property to the .props file of the primary ObjectServer.

agg-p-props-append: |
  MessageLevel: 'debug'

Data element: agg-p-sql-extensions
The following data element adds a custom database that contains a single table to the primary Netcool/OMNIbus ObjectServer.

agg-p-sql-extensions: |
  -- create a custom db
  create database mydb;
  go

  -- create a custom table
  create table mydb.mytable persistent
  (
    col1 incr primary key,
    col2 varchar(255)
  );
  go
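To apply a change such as this, edit the configmap in your cluster and restart the pod so that the new SQL extension is picked up. The following commands are a sketch only: the configmap and pod names vary with your release name (noi is used here as an illustrative value), so list the configmaps first to find the right one.

oc get configmaps | grep ncoprimary      # find the configmap for the primary ObjectServer pod
oc edit configmap noi-ncoprimary-config  # illustrative name; use the name found above
oc delete pod noi-ncoprimary-0           # illustrative name; the extensions are applied on pod restart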

Backup Netcool/OMNIbus ObjectServer configmap
Learn about the structure of the configmap for the backup IBM Tivoli Netcool/OMNIbus ObjectServer pod, ncobackup-agg-b. Edit this configmap to customize, and add custom automations and triggers to, the backup Netcool/OMNIbus ObjectServer, and to customize the operation of the bidirectional gateway that connects the primary and backup Netcool/OMNIbus ObjectServers.

Contents
The following table lists the data elements that are contained in the backup Netcool/OMNIbus ObjectServer configmap:

Table 161. Data elements in the backup Netcool/OMNIbus ObjectServer configmap

agg-b-props-append
Description: Use this element to append a new property to the end of the Netcool/OMNIbus ObjectServer properties file on pod restart.
More information: Netcool/OMNIbus V8.1 documentation: Using the ObjectServer properties and command-line options

agg-b-sql-extensions
Description: Use this element to add a new SQL extension, such as a trigger or an automation, to the Netcool/OMNIbus ObjectServer on pod restart.
More information: Netcool/OMNIbus V8.1 documentation: ObjectServer SQL

agg-gate-map-replace
Description: The gateway map definition in this element replaces the definition in the pod (AGG_GATE.map) on pod restart.
More information: Netcool/OMNIbus V8.1 documentation: Failover configuration

agg-gate-props-append
Description: Properties that are listed in this element are appended to the gateway properties file on pod restart.
More information: Netcool/OMNIbus V8.1 documentation: Common gateway properties and command-line options

agg-gate-startup-cmd-replace
Description: The gateway startup command definition in this element replaces the definition in the pod (AGG_GATE.startup.cmd) on pod restart.
More information: Netcool/OMNIbus V8.1 documentation: Startup command file

agg-gate-tblrep-def-replace
Description: The gateway table replication definition in this element replaces the definition in the pod (AGG_GATE.tblrep.def) on pod restart.
More information: Netcool/OMNIbus V8.1 documentation: Table replication definition file

Examples of each of the data elements in this configmap are provided.

Data element: agg-b-props-append
The following data element appends a MessageLevel: 'debug' property to the .props file of the backup Netcool/OMNIbus ObjectServer.

agg-b-props-append: |
  MessageLevel: 'debug'

Data element: agg-b-sql-extensions
The following data element adds a custom database containing a single table to the backup Netcool/OMNIbus ObjectServer.

agg-b-sql-extensions: |
  -- create a custom db
  create database mydb;
  go

  -- create a custom table
  create table mydb.mytable persistent
  (
    col1 incr primary key,
    col2 varchar(255)
  );
  go

Data element: agg-gate-map-replace
The following data element replaces the definition in the pod (AGG_GATE.map).

agg-gate-map-replace: |
  # My test map
  CREATE MAPPING StatusMap
  (
    'Identifier' = '@Identifier' ON INSERT ONLY,
    'Node' = '@Node' ON INSERT ONLY
  );

Data element: agg-gate-props-append
The following data element is appended to the gateway properties file before startup.

agg-gate-props-append: |
  MessageLevel: 'debug'

Data element: agg-gate-startup-cmd-replace
The gateway startup command definition in this data element replaces the definition in the pod (AGG_GATE.startup.cmd).

agg-gate-startup-cmd-replace: |
  SHOW PROPS;

Data element: agg-gate-tblrep-def-replace
The gateway table replication definition in this data element replaces the definition in the pod (AGG_GATE.tblrep.def).

agg-gate-tblrep-def-replace: |
  # My test table replication
  REPLICATE ALL FROM TABLE 'alerts.status' USING MAP 'StatusMap';

Netcool/Impact core server configmap
Learn about the structure of the configmap for the Netcool/Impact core server pod, nciserver. Edit this configmap to customize the properties, logging features, and memory status monitoring for the primary and backup Netcool/Impact core server pods. You can also customize the Derby database that runs inside the Netcool/Impact server pod.

Contents
The following table lists the data elements that are contained in the primary Netcool/Impact core server configmap:

Table 162. Data elements in the primary Netcool/Impact core server configmap

releasename-nciserver-external-cacerts
Description: Files that are listed in this data element are used to update the trust.jks file on pod restart. Here, releasename is the release name of your cloud deployment.
More information: “Enabling SSL communications from Netcool/Impact on OpenShift” on page 193

impactcore-server-props-update
Description: Properties that are listed in this data element are used to update the NCI_*_server.props file on pod restart.
More information: To find information on the parameters that can be configured by using the NCI_server.props file, go to the Netcool/Impact documentation Welcome page and search for "NCI_server.props". Scroll down to the properties that are of interest to you.

impactcore-log4j-props-update
Description: Properties that are listed in this data element are used to update the impactserver-log4j.properties file on pod restart.
More information: Netcool/Impact documentation: Log4j properties files

impactcore-jvm-options-replace
Description: Properties that are listed in this data element are used to replace the properties in the jvm.options file on pod restart.
More information: Netcool/Impact documentation: Memory status monitoring

impactcore-derby-sql-extension
Description: The SQL recorded in this data element is applied to the Netcool/Impact Derby database on pod restart.
More information: Netcool/Impact documentation: Managing the database server

Examples of each of the data elements in this configmap are provided.

Data element: releasename-nciserver-external-cacerts
Files that are listed in this data element are used to update the trust.jks file on pod restart. Here, releasename is the release name of your cloud deployment.

releasename-nciserver-external-cacerts: |
  nciserver.importNCICACerts.enabled: false

Data element: impactcore-server-props-update
Properties that are listed in this data element are used to update the NCI_*_server.props file on pod restart.

impactcore-server-props-update: |
  impact.server.timeout=123456
  impact.servicemanager.storelogs=false
  # the following should just be appended
  fred=banana

Data element: impactcore-log4j-props-update
Properties that are listed in this data element are used to update the impactserver-log4j.properties file on pod restart.

impactcore-log4j-props-update: |
  log4j.rootCategory=DEBUG
  log4j.appender.NETCOOL=org.apache.log4j.RollingFileAppender
  log4j.appender.NETCOOL.threshold=DEBUG
  log4j.appender.NETCOOL.layout=org.apache.log4j.PatternLayout
  log4j.appender.NETCOOL.layout.ConversionPattern=%d{DATE} %-5p C3PO [%c{1}] %m%n
  log4j.appender.NETCOOL.append=false
  log4j.appender.NETCOOL.file=/opt/IBM/tivoli/impact/logs/impactserver.log
  log4j.appender.NETCOOL.bufferedIO=true
  log4j.appender.NETCOOL.maxBackupIndex=4
  log4j.appender.NETCOOL.maxFileSize=21MB

Data element: impactcore-jvm-options-replace
Properties that are listed in this data element are used to replace the properties in the jvm.options file on pod restart.

impactcore-jvm-options-replace: |
  -Xms512M
  -Xmx4096M
  -Dclient.encoding.override=UTF-8
  -Dhttps.protocols=SSL_TLSv2
  #-Xgc:classUnloadingKickoffThreshold=100
  -Dcom.ibm.jsse2.overrideDefaultTLS=true

Data element: impactcore-derby-sql-extension
The SQL recorded in this data element is applied to the Netcool/Impact Derby database on pod restart.

impactcore-derby-sql-extensions: |
  CREATE SCHEMA MYSCHEMA;
  SET SCHEMA MYSCHEMA;

  CREATE TABLE MYTABLE
  (
    keyvalue character varying (256),
    value character varying (256)
  );
  INSERT INTO MYTABLE VALUES ('mykey1', 'myvalue1');

Netcool/Impact GUI server configmap
Learn about the structure of the configmap for the IBM Tivoli Netcool/Impact GUI server pod, impactgui. Edit this configmap to customize the logging features of the Netcool/Impact GUI server, and to customize the Derby database that runs inside the Netcool/Impact server.

Contents
The following table lists the data elements that are contained in the Netcool/Impact GUI server configmap:

Table 163. Data elements in the Netcool/Impact GUI server configmap

server-props-update
Description: Properties that are listed in this data element are used to update the server.props file on pod restart.
More information: To find information on the parameters that can be configured by using the server.props file, go to the Netcool/Impact documentation Welcome page and search for "server.props". Scroll down to the properties that are of interest to you.

impactcore-log4j-props-update
Description: Properties that are listed in this data element are used to update the impactserver-log4j.properties file on pod restart.
More information: Netcool/Impact documentation: Log4j properties files

Examples of each of the data elements in this configmap are provided.

Data element: server-props-update
Properties that are listed in this data element are used to update the server.props file on pod restart.

server-props-update: |
  impact.cluster.network.call.timeout=60

Data element: impactcore-log4j-props-update
Properties that are listed in this data element are used to update the impactserver-log4j.properties file on pod restart.

impactcore-log4j-props-update: |
  log4j.rootCategory=DEBUG

Proxy configmap
Edit this configmap to configure parameters such as connection timeouts and to enable or disable transport layer security (TLS) encryption.

Contents
The proxy is configured with the {{ .Release.Name }}-proxy-config configmap, where {{ .Release.Name }} is the unique name that is assigned to the deployment, for example, noi. The configmap is used to configure parameters such as connection timeouts and to enable or disable TLS encryption, as in the following example:

connectionTimeoutMs: "900000"
externalHostOrIP: mycluster.icp
revision: "1"
routes: |
  [{"Port":6001, "Service": "{{ .Release.Name }}-objserv-agg-primary:4100"},
  {"Port":6002, "Service": "{{ .Release.Name }}-objserv-agg-backup:4100"}]
tlsEnabled: "true"
tlsHandshakeTimeoutMs: "2000"

Where mycluster.icp is the public domain name of the cluster. The following table lists the data elements that are contained in the proxy configmap:

Table 164. Data elements in the proxy configmap

connectionTimeoutMs
Use this element to specify the connection timeout in milliseconds.

externalHostOrIP
Use this element to specify the external host or IP address. After deployment, do not edit this value.

revision
Version number of the configmap. Must be incremented when a change is made to the configmap, or the change will not be active. Must be an integer greater than 0.

routes
Use this element to specify the gateway routes. After deployment, do not edit this value.

tlsEnabled
Use this element to enable or disable TLS encryption.

tlsHandshakeTimeoutMs
Use this element to specify the TLS handshake timeout in milliseconds.

After you update the configmap, the changes are automatically applied to the existing proxy pod.
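For example, to activate a configmap change by incrementing the revision element, a command such as the following can be used. This is a sketch that assumes the release name noi, so the configmap is named noi-proxy-config:

oc patch configmap noi-proxy-config --type merge -p '{"data":{"revision":"2"}}'  # increment from the previous value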

LDAP Proxy configmap
ldap_proxy_configmap is the configmap for the LDAP proxy pod, openldap. Edit this configmap to configure connections to your own LDAP server when you have LDAPmode set to proxy rather than standalone. This configmap is not used when LDAPmode is set to standalone.

Contents
The following table lists the data elements that are contained in the openldap configmap:

Table 165. Data elements in the openldap configmap

ldap-proxy-slapd-replace
Description: Replaces the contents of the slapd.conf file, which configures the connection to your LDAP server.

Dashboard Application Services Hub configmap
Learn about the structure of the configmap for the Dashboard Application Services Hub pod, webgui. Edit this configmap to customize the properties of the Web GUI Event Viewer.

Contents
The following table lists the data elements that are contained in the Dashboard Application Services Hub configmap:

Table 166. Data elements in the Dashboard Application Services Hub configmap

server-init-update
Description: Properties that are listed in this data element are used to overwrite the environmental and server session properties of the Web GUI server that are stored in the server.init initialization file.
More information: Netcool/OMNIbus documentation: server.init properties

Examples of each of the data elements in this configmap are provided.

Data element: server-init-update
Properties that are listed in this data element are used to overwrite the environmental and server session properties of the Web GUI server that are stored in the server.init initialization file.

server-init-update: |
  eventviewer.pagesize.max:20000
  columngrouping.allowedcolumns=Acknowledged,AlertGroup,Class,Customer,Location,Node,NodeAlias,NmosCauseType,NmosManagedStatus,Severity,Service
  columngrouping.maximum.columns:3
  alerts.status.sort.displayvalue=Acknowledged,Class,ExpireTime,Flash,NmosCauseType,NmosManagedStatus,OwnerGID,OwnerUID,SuppressEscl,TaskList,Type,X733EventType,X733ProbableCause
  dashboard.edit.render.mode:applet
  dashboard.render.mode:applet
  webtop.keepalive.interval:3
  datasource.failback.delay:120
  users.global.filter.mode:1
  users.global.view.mode:1
  users.group.filter.mode:1
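Because server-init-update is read when the pod starts, the webgui pod must be restarted for a change to this data element to take effect. A minimal sketch, assuming an illustrative release name of noi; check the actual pod name first:

oc get pods | grep webgui    # find the Dashboard Application Services Hub pod
oc delete pod noi-webgui-0   # illustrative name; the pod is recreated and re-reads the configmap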

Gateway for Message Bus configmap
Learn about the structure of the configmap for the Gateway for Message Bus. This configmap is associated with the scala pod. Edit this configmap to customize the properties of the Gateway for Message Bus, which defines which data is transferred from the Netcool/OMNIbus ObjectServer to Operations Analytics - Log Analysis to support the Event Search capability.

Contents
The following table lists the data elements that are contained in the Gateway for Message Bus configmap:

Table 167. Data elements in the Gateway for Message Bus configmap

xml-gate-props-append
Description: Properties that are listed in this data element are appended to the XML gateway properties file on pod startup.
More information: Netcool/OMNIbus documentation: Gateway for Message Bus properties file

xml-gate-map-replace
Description: Properties that are listed in this data element replace the gateway map definition in the LA_GATE.map file on pod startup.
More information: Netcool/OMNIbus documentation: Gateway for Message Bus map definition file

xml-gate-tblrep-def-replace
Description: Properties that are listed in this data element replace the table replication definition in the LA_GATE.tblrep.def file on pod startup.
More information: Netcool/OMNIbus documentation: Gateway for Message Bus table replication definition file

xml-gate-startup-cmd-replace
Description: Properties that are listed in this data element replace the startup command definition in the LA_GATE.startup.cmd file on pod startup.
More information: Netcool/OMNIbus documentation: Gateway for Message Bus startup command file

Examples of each of the data elements in this configmap are provided.

Data element: xml-gate-props-append
Properties that are listed in this data element are appended to the XML gateway properties file on pod startup.

xml-gate-props-append: |
  MessageLevel: 'debug'

Data element: xml-gate-map-replace
Properties that are listed in this data element replace the gateway map definition in the LA_GATE.map file on pod startup.

xml-gate-map-replace: |
  CREATE LOOKUP SeverityLkTable
  (
    { 0, 'Clear' },
    { 1, 'Indeterminate' },
    { 2, 'Warning' },
    { 3, 'Minor' },
    { 4, 'Major' },
    { 5, 'Critical' }
  )
  DEFAULT = TO_STRING('@Severity');

  CREATE LOOKUP TypeLkTable
  (
    { 0, 'Type Not Set' },
    { 1, 'Problem' },
    { 2, 'Resolution' },
    { 3, 'Visionary Problem' },
    { 4, 'Visionary Resolution' },
    { 7, 'ISM New Alarm' },
    { 8, 'ISM Old Alarm' },
    { 11, 'More Severe' },
    { 12, 'Less Severe' },
    { 13, 'Information' }
  )
  DEFAULT = TO_STRING('@Type');

  CREATE LOOKUP ClassLkTable
  (
    { 0, 'Default Class' },
    { 95, 'Fujitsu FLEXR+' }
  )
  DEFAULT = TO_STRING('@Class');

  # My test map
  CREATE MAPPING StatusMap
  (
    'LastOccurrence' = '@LastOccurrence',
    'Summary' = '@Summary',
    'NmosObjInst' = '@NmosObjInst',
    'Node' = '@Node',
    'NodeAlias' = '@NodeAlias' NOTNULL '@Node',
    'Severity' = Lookup('@Severity', 'SeverityLkTable'),
    'AlertGroup' = '@AlertGroup',
    'AlertKey' = '@AlertKey',
    'Identifier' = '@Identifier',
    'Location' = '@Location',
    'Type' = Lookup('@Type', 'TypeLkTable'),
    'Tally' = '@Tally',
    'Class' = Lookup('@Class', 'ClassLkTable'),
    'OmniText' = '@Manager' + ' ' + '@Agent',
    'ActionCode' = ACTION_CODE,
    'ServerName' = '@ServerName',
    'ServerSerial' = '@ServerSerial'
  );

Data element: xml-gate-tblrep-def-replace
Properties that are listed in this data element replace the table replication definition in the LA_GATE.tblrep.def file on pod startup.

xml-gate-tblrep-def-replace: |
  # My test table replication definition
  REPLICATE FT_INSERT,FT_UPDATE FROM TABLE 'alerts.status' USING MAP 'StatusMap';

Data element: xml-gate-startup-cmd-replace
Properties that are listed in this data element replace the startup command definition in the LA_GATE.startup.cmd file on pod startup.

xml-gate-startup-cmd-replace: |
  SHOW PROPS;

Configuration share configmap
The configmap for the configuration share configures pod-to-pod file sharing, and should not be edited.

Cassandra configmap
The cassandra configmap is optionally used by Cassandra to ensure that nodes are started consecutively. If node startup fails, you can edit the configmap to start another node.

Contents
The following table lists the data elements that are contained in the cassandra configmap:

Table 168. Data elements in the cassandra configmap

bootstrapping.node
Description: The name of the node that is bootstrapping (starting), or the last node to start.
More information: https://github.ibm.com/hdm/common-cassandra/wiki/seed-options
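If a node fails to bootstrap, you can point bootstrapping.node at another node and let startup continue. The following command is a sketch only: both the configmap name (cassandra here, possibly prefixed with your release name) and the node name noi-cassandra-1 are illustrative values for your deployment.

oc patch configmap cassandra --type merge -p '{"data":{"bootstrapping.node":"noi-cassandra-1"}}'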

ASM-UI configmap
This configmap has the configuration for the IBM Agile Service Manager user interface. It should not be edited.

Cloud native analytics gateway configmap
The ea-noi-layer-eanoigateway configmap is created during the install and should not be edited.

CouchDB configmap
The couchdb configmap is created during the install and should not be edited.

Kafka configmap
The kafka configmap is created during the install and should not be edited.

Zookeeper configmap
The zookeeper configmap is created during the install and should not be edited.

Event reference
Use this information to understand the Netcool Operations Insight event structure. The Netcool Operations Insight event structure is a flat structure made up of column data from the Netcool/OMNIbus ObjectServer alerts.status table and from other sources.

Column data from the ObjectServer
The Netcool/OMNIbus ObjectServer alerts.status table contains status information about problems that have been detected by probes. For a description of each of the columns in the alerts.status table, see the link at the bottom of this topic.

Column data from other sources
In addition to the column data from the Netcool/OMNIbus ObjectServer alerts.status table, Netcool Operations Insight events also contain column data from other sources.

Columns from cloud native analytics
The following table describes the content of each of the columns.
Note: Any references in these column descriptions to incidents, incident management, or tenants can be ignored during the time frame of the Netcool Operations Insight 1.6.1 release.

Table 169. Columns from cloud native analytics

AdvCorrCauseType (Varchar)
Field used by the Netcool Knowledge Library. Source: Netcool Knowledge Library.

AdvCorrServerName (Varchar)
Field used by the Netcool Knowledge Library. Source: Netcool Knowledge Library.

AdvCorrServerSerial (Integer)
Field used by the Netcool Knowledge Library. Source: Netcool Knowledge Library.

AggregationFirst (Time)
Timestamp when the event was first inserted at the aggregation layer. Source: Netcool/OMNIbus multitiered architecture.

AsmStatusId (Varchar(64))
Status of one or more resources modeled within the topology management capability. This is a raw status ID that can be used within the topology or used in custom automation or trigger sets. Source: The cloud native analytics service.

CEAAsmStatusDetails (Varchar(4096))
Details on the AsmStatusId, if there is a corresponding status ID within the event or synthetic grouping parent event. This is used to provide time data that is used to display status in the Event Viewer. Source: The cloud native analytics service.

CEACorrelationKey (Varchar(255))
The correlating group key for this event. The correlation key value links this event to a synthetic grouping parent event. This correlating group key could be for one or more correlating groups from different algorithms. Source: The cloud native analytics service.

CEACorrelationDetails (Varchar(4096))
Details on each correlation group that this event has been correlated to. The data stored here is a set of name-value pairs, where the name is the correlating group key to a JSON structure of group details. This column is intended for use by the UI. Source: The cloud native analytics service.

CEAEventClassification (Varchar(255))
Event type classification used to help calculate the probable cause score. Source: The cloud native analytics service.

CEAEventScore (Integer)
The non-normalized probable cause score associated with an event. Source: The cloud native analytics service.

CEAIsSeasonal (Integer)
An indicator of whether the event is seasonal or not. Source: The cloud native analytics service.

CEASeasonalDetails (Varchar(4096))
Details on the nature of the event's seasonality, if it is seasonal. Source: The cloud native analytics service.

CEMDeduplicationKey (Varchar(64))
Used as part of the integration of cloud event management capabilities within Netcool Operations Insight. Deduplicates occurrences of the same event. Source: The event management service.

CEMErrorCode (Integer)
Used as part of the integration of cloud event management capabilities within Netcool Operations Insight. Gateway-specific error code. Source: The event management service.

CEMEventId (Varchar(64))
Used as part of the integration of cloud event management capabilities within Netcool Operations Insight. Unique identifier for the occurrence of this event. Source: The event management service.

CEMIncidentUUID (Varchar(64))
Used as part of the integration of cloud event management capabilities within Netcool Operations Insight. Identifier of the incident that this event belongs to. Source: The event management service.

CEMSubscriptionID (Varchar(64))
Used as part of the integration of cloud event management capabilities within Netcool Operations Insight. Identifies the cloud event management subscription that the event originates from. Source: The event management service.

CauseType (Varchar)
Field used by the scope-based grouping algorithm. Source: Scope-based grouping algorithm.

CollectionFirst (Time)
Timestamp when the event was inserted at the collection layer. Source: Netcool/OMNIbus multitiered architecture.

DisplayFirst (Time)
Timestamp when the event was first inserted at the display layer. Source: Netcool/OMNIbus multitiered architecture.

EventData (Varchar)
Incident management workflow column. Normalized JSON blob containing overflow property data from the event. Source: The event management service.

IBMExtractedType (Varchar)
Field used by the Netcool Knowledge Library. Source: Scope-based grouping algorithm.

IncidentKey (Varchar)
Incident management workflow. The incident key of an incident that this event is part of. Source: Incident management.

IncidentUuid (Varchar)
Incident management workflow. Unique UUID reference key to the event in the incident management workflow. Source: Incident management.

IsClosed (Integer)
Incident management workflow. Additional state indication column to reflect the closure state of the event in the incident management workflow. Source: Incident management.

LastOccurrenceUSec (Integer)
In the LastOccurrence field, the last occurrence of an event is displayed down to the latest second. The LastOccurrenceUSec field provides extra precision by indicating any additional microseconds associated with the last occurrence time. For example, assume the last occurrence time for an event is 02:15:59:243 am, where the units are HH:MM:SS:mmm. In this case, the LastOccurrence and LastOccurrenceUSec fields for this event contain the following values: LastOccurrence: 02:15:59 am; LastOccurrenceUSec: 243000 microseconds. Source: Scope-based grouping algorithm; the topology management service.

LocalObjRelate (Varchar)
Field used by the Netcool Knowledge Library. Source: Netcool Knowledge Library.

LocalTertObj (Varchar)
Field used by the Netcool Knowledge Library. Source: Netcool Knowledge Library.

RemoteObjRelate (Varchar)
Field used by the Netcool Knowledge Library. Source: Netcool Knowledge Library.

RemoteTertObj (Varchar)
Field used by the Netcool Knowledge Library. Source: Netcool Knowledge Library.

RunbookID (Integer)
RunbookID of the runbook that is linked to this event via a trigger. Source: The runbook automation service.

RunbookParameters (Varchar)
Parameters to be sent to runbook execution in JSON format. This field is populated by the rules of the trigger. Source: The runbook automation service.

RunbookParametersB64 (Varchar)
Base64 encoding of the "RunbookParameters" field. Source: The runbook automation service.

RunbookStatus (Varchar)
Status of the runbook execution. Source: The runbook automation service.

RunbookURL (Varchar)
URL to launch into the Runbook UI to execute the linked runbook with the RunbookID and the RunbookParameters pre-filled. Source: The runbook automation service.

TenantId (Varchar)
Incident management workflow. Stores the ID of the tenant that owns the alert in the incident management workflow. Events have no tenant ID until they become part of an incident. Tenant assignment is likely to be on all events at some point in the future. Source: Incident management.
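As an illustration of how these columns can be queried, the following ObjectServer SQL (a sketch only; the filter values are illustrative) lists seasonal events together with their correlation keys and probable cause scores:

select Node, Summary, CEACorrelationKey, CEAEventScore
from alerts.status
where CEAIsSeasonal = 1 and CEACorrelationKey != '';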

Insight packs
Insight packs are used together with on-premises IBM Operations Analytics - Log Analysis to provide Event Search and Topology Search capabilities. You can find more information on the insight packs in the following sections of this documentation:
• Installing the insight packs
– Installing the Tivoli Netcool/OMNIbus Insight Pack
– “Installing the Network Manager Insight Pack” on page 89
• Configuring the insight packs
– Configuring the Tivoli Netcool/OMNIbus Insight Pack
– Configuring the Network Manager Insight Pack
All of the insight pack documentation can be found in the following PDF documents:

Document title: IBM Operations Analytics - Log Analysis: Netcool/OMNIbus Insight Pack README
Link to document: Click here to download.
Description: Documents the installation, operation, and customization options for the Insight Pack that enables the Event Search integration between IBM Operations Analytics - Log Analysis and Netcool/OMNIbus.

Document title: IBM Operations Analytics - Log Analysis: Network Manager Insight Pack README
Link to document: Click here to download.
Description: Documents the installation, operation, and customization options for the Insight Pack that enables the topology search integration between IBM Operations Analytics - Log Analysis and Network Manager.

Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY 10504-1785 U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:
IBM World Trade Asia Corporation Licensing 2-31 Roppongi 3-chome, Minato-ku Tokyo 106-0032, Japan

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact: IBM Corporation 958/NH04 IBM Centre, St Leonards 601 Pacific Hwy St Leonards, NSW, 2069 Australia

IBM Corporation 896471/H128B 76 Upper Ground London SE1 9PZ United Kingdom

IBM Corporation JBF1/SOM1 294 Route 100 Somers, NY, 10589-0100 United States of America Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee. The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. If you are viewing this information softcopy, the photographs and color illustrations may not appear.

Trademarks
AIX, Db2, IBM, the IBM logo, ibm.com®, Informix, iSeries, Netcool, OS/390®, Passport Advantage, pSeries, Service Request Manage, System p, System z, Tivoli, the Tivoli logo, Tivoli Enterprise Console®, TotalStorage, WebSphere, xSeries, z/OS, and zSeries are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. Adobe, Acrobat, Portable Document Format (PDF), PostScript, and all Adobe-based trademarks are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, other countries, or both. Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive licensee of Linus Torvalds, owner of the mark on a worldwide basis. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Other company, product, or service names may be trademarks or service marks of others.

Appendix A. Release notes

IBM Netcool Operations Insight V1.6.1 is available. Compatibility, installation, and other getting-started issues are addressed in these release notes.

Contents
• “Description” on page 707
• “Compatibility” on page 707
• “System requirements” on page 707
• “New product features and functions in V1.6.1” on page 707
• “Known problems at eGA” on page 710
• “Support” on page 726

Description
Netcool Operations Insight combines real-time event consolidation and correlation capabilities with Event Search and Event Analytics. It further delivers seasonality analysis to help detect regularly occurring issues. Netcool Operations Insight also enables real-time enrichment and correlation to enable agile responses to alerts raised across disparate systems, including application topology.

Compatibility
IBM Netcool Operations Insight includes the product and component versions listed in the following topic: “V1.6.1 Product and component version matrix” on page 9. This topic also includes information on the eAssemblies and fix packs required to download and install. For more information about the Netcool Operations Insight products and components, see “Products and components on premises” on page 5.

System requirements
For information about hardware and software compatibility of each component, and detailed system requirements, see the IBM Software Product Compatibility Reports website:
http://www-969.ibm.com/software/reports/compatibility/clarity/index.html
Tip: When you create a report, search for Netcool Operations Insight and select your version (for example, V1.4). In the report, additional useful information is available through hover help and additional links. For example, to check the compatibility with an operating system for each component, go to the Operating Systems tab, find the row for your operating system, and hover over the icon in the Components column. For more detailed information about restrictions, click the View link in the Details column.

New product features and functions in V1.6.1

Updated product versions in V1.6.1
The Netcool Operations Insight V1.6.1 solution includes features delivered by the fix packs and extensions of the products and versions listed in the following topic: “V1.6.1 Product and component version matrix” on page 9. The products are available for download from Passport Advantage and Fix Central.

For more information about the new features in these products and components, see the following topics:

What's new in...

Red Hat OpenShift
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.4/html/release_notes/index

IBM Cloud Platform Common Services
https://www.ibm.com/support/knowledgecenter/en/SSHKN6/kc_welcome_cs.html

The cloud event management service
https://www.ibm.com/support/knowledgecenter/en/SSURRN/com.ibm.cem.doc/em_whatsnew.html

The runbook automation service
https://www.ibm.com/support/knowledgecenter/SSZQDR/com.ibm.rba.doc/GS_whatsnew.html

IBM Agile Service Manager
https://www.ibm.com/support/knowledgecenter/en/SS9LQB_1.1.8/ProductOverview/r_asm_whatsnew.html

IBM Tivoli Netcool/OMNIbus
https://www.ibm.com/support/knowledgecenter/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/install/reference/omn_prodovr_whatsnew.html

IBM Tivoli Netcool/Impact
https://www.ibm.com/support/knowledgecenter/SSSHYH_7.1.0/com.ibm.netcoolimpact.doc/whatsnew.html

IBM Operations Analytics - Log Analysis
https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.6/com.ibm.scala.doc/overview/ovr-whats_new.html

IBM Tivoli Network Manager
https://www.ibm.com/support/knowledgecenter/SSSHRK_4.2.0/overview/concept/ovr_whatsnew.html

IBM Tivoli Netcool Configuration Manager
https://www.ibm.com/support/knowledgecenter/SS7UH9_6.4.2/ncm/wip/common/reference/ncm_ovr_whatsnew.html

IBM Network Performance Insight
https://www.ibm.com/support/knowledgecenter/SSCVHB_1.3.1/release_notes/cnpi_release_notes.html#cnpi_release_notes__whatsnew

Deprecated features in V1.6.1
The following features and functions are deprecated in the Netcool Operations Insight V1.6.1 product:

Netcool Operations Insight on IBM Cloud Private
Support for deploying Netcool Operations Insight on IBM Cloud Private was removed in V1.6.1.

IBM Operations Analytics - Log Analysis
Support for deploying IBM Operations Analytics - Log Analysis with Netcool Operations Insight on IBM Cloud Private was removed in V1.6.1. IBM Operations Analytics - Log Analysis is still supported in an on-premises deployment of Netcool Operations Insight.

New features in V1.6.1
The following features and functions are available in the Netcool Operations Insight V1.6.1 product:

OpenShift
Support for deploying Netcool Operations Insight on Red Hat OpenShift V4.3 and V4.4.3 was added in V1.6.1.

Platform
Support for deploying Netcool Operations Insight on Red Hat OpenShift on the following cloud platforms was added in V1.6.1:
• Amazon Web Services (AWS)

• Google Cloud Platform
• Microsoft Azure
• Red Hat OpenShift Kubernetes Service (ROKS)

Operator installation
Netcool Operations Insight on Red Hat OpenShift V1.6.1 is installed with operators. For more information, see “Installing Netcool Operations Insight on Red Hat OpenShift” on page 118.

Probable cause
You can use a new analytics capability that is designed to identify the event with the greatest probability of being the cause of the event group. For more information, see “Configuring probable cause” on page 207.

Topology correlation
You can create topology templates to generate defined topologies, which search your topology database for instances that match their conditions. For more information, see “Configuring topological correlation” on page 208.

Temporal correlation enhancements
Temporal correlation analytics were enhanced in this release to also detect patterns that are based on the identified temporal groups. Temporal pattern analysis identifies patterns of behavior among temporal groups that are similar but occur on different resources. Subsequent events, which match the pattern and occur on a new common resource, are grouped together. For more information, see “Configure temporal correlation” on page 207.

Documentation changes in V1.6.1
The table of contents has been completely overhauled to make it more task oriented. There are also brand new sections that describe the content of the new Netcool Operations Insight user interface, such as the new Events page. Here are the key changes and additions:
• Expanded Configuring section, which now includes information on how to configure analytics, event source integrations, automation types for runbooks, and API keys.
• Expanded Getting started section, which now includes information on how to access the new main Netcool Operations Insight user interface.
• Expanded Administering section, which now includes information on how to administer the new Events page, analytics policies, and runbooks.
• New Operations section containing all relevant operations tasks. For operations teams working with Cloud and hybrid systems, this section includes event monitoring tasks in the new Events page, as well as monitoring topology and operations efficiency dashboards.
• The Configuring, Getting started, Administering, and Operations sections have been divided into separate Cloud and hybrid systems, and On-premises systems sections, to enable you to easily find the tasks that you need.
The following table provides a mapping from key sections of the previous version to the new location of that documentation in the V1.6.1 table of contents.

Table 170. Location of key documentation sections in previous version and the new location of that documentation in V1.6.1

Location in V1.6.0 documentation: Configuring > Configuring Operations Management > Configuring Event Search
Location in V1.6.1 documentation: Configuring > On-premises systems > Configuring Operations Management > “Configuring Event Search” on page 301

Location in V1.6.0 documentation: Configuring > Configuring Network Management > Configuring the Network Manager Insight Pack
Location in V1.6.1 documentation: Configuring > On-premises systems > Configuring Operations Management > Configuring Topology Search > “Configuring Topology Search” on page 338

Location in V1.6.0 documentation: Configuring > Configuring integration to IBM Connections
Location in V1.6.1 documentation: Configuring > On-premises systems > Configuring Operations Management > “Configuring integration to IBM Connections” on page 339

Location in V1.6.0 documentation: Event search
Location in V1.6.1 documentation: Configuring > On-premises systems > Configuring Operations Management > “Configuring Event Search” on page 301; Operations > On-premises systems > “Using Event Search” on page 499

Location in V1.6.0 documentation: Event Analytics
Location in V1.6.1 documentation: Installing > Installing on premises > Installing on-premises > Installing Operations Management > “Installing Event Analytics” on page 53; Installing > Installing on premises > “Uninstalling Event Analytics” on page 99; Configuring > On-premises systems > Configuring Operations Management > “Configuring Event Analytics” on page 264; Administering > On-premises systems > “Administering Event Analytics” on page 404

Location in V1.6.0 documentation: Topology search
Location in V1.6.1 documentation: Configuring > On-premises systems > Configuring Operations Management > “Configuring Topology Search” on page 332; Operations > On-premises systems > “Using Topology Search” on page 503

Location in V1.6.0 documentation: Networks for Operations Insight
Location in V1.6.1 documentation: Operations > On-premises systems > “IBM Networks for Operations Insight” on page 505

Known problems at eGA
The following problems with IBM Netcool Operations Insight were known at the time of eGA.
• “Solution-level problems” on page 710
• IBM Netcool Operations Insight on OpenShift
• “Device Dashboard” on page 719
• Event Analytics
• “Event Search” on page 723
• “Globalization” on page 724
• “Network Health Dashboard” on page 725
• “Operations Analytics - Log Analysis” on page 726

Solution-level problems

Missing topology on event-driven topology view
When viewing topology from events, the topology at the time that the event occurred is shown. If topology changes occurred very close in time to the event, then those topology changes are sometimes not visible. To ensure that you see all the topological context, you may need to select More information so that the Topology Viewer page opens with the timeline bar, and then slightly alter the timeline window.

Unable to upgrade connection when running load sample data
This issue occurs on an OpenShift V4.3 deployment. When running the load sample data, falling back to logs is returned. This is a minor error, and the load sample data still works.

Runbooks fail to execute after 1000 executions
This issue occurs when, after setting up a trigger to apply to all events, the runbook executions reach over 1000. Runbooks fail to execute after this number of executions, and errors appear in the logs. Users are not able to see their latest execution, with the relevant runbook and timestamp, at the top of the execution table.

Inactive topology management links
Roles associated with topology management are available even if topology.enabled has been set to false in the operator properties file. If a topology management role is assigned to a user when topology is not enabled, then that user can see the Topology and Topology Dashboard navigation links, but these links are not active.

Unable to create a connection to an on-premises Impact instance in hybrid deployment
Attempting to create an Impact connection with the on-premises DASH UI causes 'Path specified has an invalid formatting.' to be displayed. If you need to configure a new connection, then you must do so by using the following procedure:
1. Use the command line to configure the required connection.

JazzSM_path/ui/bin/restcli.sh putProvider -username smadmin -password password -provider "Impact_NCICLUSTER.host" -file input.txt

where:
• JazzSM_path is the path of the Jazz for Service Management installation, usually /opt/IBM/JazzSM.
• password is the password for the smadmin administrative user.
• host is the Impact server, for example test1.fyre.ibm.com.
• input.txt has content similar to the following (where host is the Impact server, for example test1.fyre.ibm.com):

{ "authUser": "impactuser", "authPassword": "netcool", "baseUrl": "https:\/\/test1.fyre.ibm.com:17311\/ibm\/tivoli\/rest", "datasetsUri": "\/providers\/Impact_NCICLUSTER.test1.fyre.ibm.com\/datasets", "datasourcesUri": "\/providers\/Impact_NCICLUSTER.test1.fyre.ibm.com\/ datasources", "description": "Impact_NCICLUSTER", "externalProviderId": "Impact_NCICLUSTER", "id": "Impact_NCICLUSTER.test1.fyre.ibm.com", "label": "Impact_NCICLUSTER", "remote": true, "sso": false, "type": "Impact_NCICLUSTER", "uri": "\/providers\/Impact_NCICLUSTER.test1.fyre.ibm.com", "useFIPS": true }

2. Restart Dashboard Application Services Hub on your Operations Management on-premises installation by using the following commands.

cd JazzSM_WAS_Profile/bin
./stopServer.sh server1 -username smadmin -password password
./startServer.sh server1

where JazzSM_WAS_Profile is the location of the application server profile that is used for Jazz for Service Management. This is usually /opt/IBM/JazzSM/profile.

Event Viewer count is not correct
This issue concerns Web GUI and the Event Viewer. The filter count numbers, represented by the colored icons on the toolbar of the Event Viewer, should show the total number of events, including all parent and child events in the list. However, when the user selects a filter, child events that match the selected filter but whose parent events do not are included in the count number but are not shown in the list.

Fail to connect to the primary cluster in Netcool/Impact GUI after failback
When an Impact failback happens, the Netcool/Impact GUI is not able to identify which is the active primary cluster. Therefore, when the user logs in to the Netcool/Impact GUI, an error appears. This issue concerns only the user interface and does not affect the backend. To solve the problem, delete all available Netcool/Impact server pods by using the following example command:

kubectl delete pod release-name-nciserver-0 release-name-nciserver-1 ... release-name-nciserver-n

This command resynchronizes the system and involves downtime while the Netcool/Impact pods are restarted. Once the restart completes, Netcool/Impact is available again.

Web GUI on premises is missing menu
This issue occurs when you install Netcool Operations Insight on premises with a single ObjectServer. After updating Web GUI with Network Manager, the navigation panel menu in the Device Dashboard is missing. To solve this issue, ensure that Network Manager is updated to the latest fix pack and that Jazz for Service Management, Device Dashboard, and WebSphere Application Server are running the latest versions.

Unable to create new users in LDAP using WebSphere Application Server
When a new user is created by using WebSphere Application Server, the UniqueName attribute references the defaultFileBasedRealm instead of LDAP. This means that the new user cannot be assigned to groups, and therefore cannot be assigned roles in LDAP.

Certificate Verification popup
When logging in to DASH, an Information pop-up is displayed, requesting that certificates for some links be verified. These links do not work. This pop-up can be safely closed.

IBM Netcool Operations Insight on OpenShift

Scope-based grouping
Scope-based sub-grouping is not currently supported by the Cloud Native Event Analytics scope-based grouping ObjectServer triggers.

Failed ncoprimary, which does not restart successfully
If ncoprimary fails and does not restart successfully, run the following command:

oc edit sts release-name-ncoprimary-0

Increase the liveness and readiness probe settings by editing the following values:

initialDelaySeconds:
periodSeconds:
timeoutSeconds:
failureThreshold:
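For example, after running the oc edit command, the probe sections of the statefulset might be adjusted as in the following sketch. The field names are standard Kubernetes probe settings; the values shown are illustrative assumptions only, not recommended settings, so tune them for your environment.

livenessProbe:
  initialDelaySeconds: 600   # allow more time before the first probe
  periodSeconds: 60          # probe less often
  timeoutSeconds: 30         # tolerate slow probe responses
  failureThreshold: 10       # allow more failures before a restart
readinessProbe:
  initialDelaySeconds: 120
  periodSeconds: 60
  timeoutSeconds: 30
  failureThreshold: 10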

Missing CEAEventScore column
If the CEAEventScore column is missing from the alert list for the Example_IBM_CloudAnalytics system view, manually add the column to see probable root cause information for events in the alert list.

When installing Netcool Operations Insight on OpenShift, occasionally the ibm-hdm-analytics-dev-eventsqueryservice pod fails to start; it is waiting for the eventsquery-checkforschema container
If you notice that the ibm-hdm-analytics-dev-eventsqueryservice pod is taking a long time to start up, perform the following troubleshooting steps:
1. Inspect the event query service log file by running the following command:

oc logs name-ibm-hdm-analytics-dev-eventsqueryservice-random-string -c eventsquery-checkforschema

Where:
• name is the name of your installation instance.
• random-string is a random string of numbers and letters.
2. Look for a line that contains text similar to the following:

{"name":"checkdb","hostname":"noi-204-ibm-hdm-analytics-dev- eventsqueryservice-574cd4bc78ws2g", "pid":22,"level":30,"requirements": {"keyspace":true,"events":true,"events_byid":false}, "msg":"Requirements not met, re-check timer started.","time":"2020-06-29T22:30:08.610Z","v":0}

This log entry indicates that the required schema has only been partially created, causing the connection to the schema to time out, with the result that the dependent ibm-hdm-analytics-dev-eventsqueryservice pod fails to start. To resolve this issue, proceed as follows:
1. From the Kubernetes command line, issue the following command to list all of the pods on your cluster:

oc get pod

2. Find the archiving service pod. This pod has a name similar to the following:

name-ibm-hdm-analytics-dev-archivingservice-random-string

Where:
• name is the name of your installation instance.
• random-string is a random string of numbers and letters.
For example:

mynoi-ibm-hdm-analytics-dev-archivingservice-8656c4dc8b-hxkzh

3. Copy the pod name from the previous step for use in the next command.
4. Issue the following command to enter the archiving service and create the missing elements of the schema:

oc exec -it name-of-archiving-service-pod /app/entrypoint.sh -- npm run setup

Where name-of-archiving-service-pod is the name of the archiving service pod.
Note: This command generates a number of harmless errors, which can be ignored.

Runbook editor: only Windows keyboard shortcuts are displayed
In the Runbooks GUI editor, only Windows keyboard shortcuts (for example, Ctrl+2) are displayed, even if you are running Netcool Operations Insight on a Macintosh. The workaround is to use the corresponding Macintosh shortcut; for example, Command+2 instead of Ctrl+2.

Policies GUI shows zero policy count
When you first log in to the Policies GUI, if there are more than a small number of policies, the policy count shows as 0 (zero). To work around this issue, filter the policy list by a given policy type, such as Temporal Grouping. The policy count then shows the count of that policy type.

Resource is not deleted automatically when the user disables a feature after installation
This issue occurs when users disable features such as topology management or cloud event management after installation. The resource is not automatically deleted; to find out whether the resource was deleted, check the operator log.

Event timeline comments with accents on the characters or in non-Latin based alphabets do not render correctly
In the Cloud GUI Events page, if you enter a comment in the event timeline with non-English characters, the stored comment is not displayed correctly. For example:

• If you type the comment in a European language such as Italian, any characters with accents do not display correctly. The stored comment is partially legible, depending on how many accented letters are in the comment. For example, if you enter the Italian text È assicurato un intervento h24 durante tutto l’anno then the system displays the stored comment as: Ã assicurato un intervento h24 durante tutto lâ anno.
• If you type the comment in a non-Latin alphabet, such as Arabic, Hebrew, or Chinese, the stored comment is completely illegible.

This problem does not manifest itself in the classic Netcool Web GUI Event Viewer on DASH.

Tooltip truncation
Some tooltips might not display correctly. The native tooltip in Firefox is truncated on Windows operating systems.

Slanted line
A slanted line appears when two events with a single event instance and the same first and last occurrence are displayed in the policy details UI.

nciserver pods aren't added to the NCICLUSTER when scaled up on a size 0 environment
This issue occurs when Netcool Operations Insight is deployed on Red Hat OpenShift on a size 0 environment. nciserver-1 is not added to the NCICLUSTER after scaling up and deleting the nciserver pods. To confirm that nciserver-1 is not added to the NCICLUSTER, delete the primary pod, nciserver-0. nciserver-1 does not detect that nciserver-0 was deleted, because they are not in the same cluster, and nciserver-1 does not take over as the primary pod. This issue occurs because scaling up from a trial deployment to a production deployment after installation is not supported.

Spark line grows off the page for groups with large numbers of events
This issue occurs when Netcool Operations Insight is deployed at size 0 on a Red Hat OpenShift cluster and trained with a large number of events. In the temporal group viewer, groups with large numbers of events cause the spark line to grow off the page, preventing the group from being expanded.

Manage Policies widget
This issue occurs when Netcool Operations Insight is deployed at size 0 on a Red Hat OpenShift cluster and trained with a large number of events. The temporal policies list does not provide access to policy inspection, and therefore to the timeline graph, membership events, or the approve or reject option. The policy names are not clickable. To avoid this issue, open the Event Viewer before opening the policy list.

noi-proxy unable to recover after OpenShift upgrade or master nodes reboot
This problem occurs when the user reboots all master nodes or performs an OpenShift upgrade. The noi-proxy loses its connection to the Kubernetes API, and the proxy pod remains in a broken state. This prevents remote Netcool/OMNIbus clients from connecting. To solve this issue, manually restart the pod.

For demo/trial deployments, PodDisruptionBudgets (PDBs) block the upgrade of Red Hat OpenShift
When Netcool Operations Insight is deployed at size 0 (demo or trial deployment), PodDisruptionBudgets block the upgrade of OpenShift. PDBs can be deleted by running the following commands before upgrading OpenShift:

oc get pdb | grep noi-release-name
oc delete pdb -l release=noi-release-name --all
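For example, as an illustrative sketch, if the release is named noi and oc get pdb reports a PDB named noi-cassandra-pdb (both names are assumptions here), the sequence might look like:

oc get pdb | grep noi
oc delete pdb noi-cassandra-pdb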

The oc get pdb command returns the PDB names, for example: noi-cassandra-pdb. Each PDB must be deleted with the oc delete pdb command.

Netcool Operations Insight pods detect topology management SecurityContextConstraints (SCC)
This is a known issue that occurs when you deploy topology management and Netcool Operations Insight on a clean Red Hat OpenShift cluster. The configured SCC across the Netcool Operations Insight pods uses the topology management SCC, ibm-netcool-asm-prod-scc, and not the intended Netcool Operations Insight SCC. Both topology management and Netcool Operations Insight are deployed into the same namespace, with topology management deployed before Netcool Operations Insight. The topology management capability is deployed to the default service account, with the topology management SCC applied across the entire namespace. Therefore, the Pod Admission Controller selects the topology management SCC instead of the Netcool Operations Insight one.

Cloud native analytics Kafka clients fail to re-establish a connection to Kafka brokers on Kafka restart
When the Kafka pod restarts, various Kafka client applications fail to re-establish a connection to Kafka. Applications presenting this issue include, for example, inferenceservice, ea-asm-normalizer-normalizerstreams, and ea-asm-normalizer-mirrormaker. This is a known issue with the common-kafka containers and occurs when using a size 0 deployment. The details of the issue are as follows:

WARN [2020-02-05 10:26:52,183] org.apache.kafka.clients.NetworkClient: [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
WARN [2020-02-05 10:26:52,183] org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-2, groupId=0] Connection to node 0 could not be established. Broker may not be available.
WARN [2020-02-05 10:26:52,183] org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-5, groupId=0] Connection to node 0 could not be established. Broker may not be available.
WARN [2020-02-05 10:26:52,183] org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-3, groupId=0] Connection to node 0 could not be established. Broker may not be available.
INFO [2020-02-05 10:26:54,906] com.ibm.itsm.topology.kafka.client.api.ConsumerWorker: Consumer topic [ea-asm-enriched-events] Read 1 records from Kafka
INFO [2020-02-05 10:26:54,907] com.ibm.itsm.topology.kafka.client.api.ConsumerWorker: Consumer topic [ea-asm-enriched-events] Read 1 records from Kafka
INFO [2020-02-05 10:26:54,907] com.ibm.itsm.topology.kafka.client.api.ConsumerWorker: Consumer topic [ea-asm-enriched-events] Read 2 records from Kafka
INFO [2020-02-05 10:26:54,914] com.ibm.itsm.topology.kafka.client.api.ConsumerWorker: Consumer topic [ea-asm-enriched-events] Read 3 records from Kafka
INFO [2020-02-05 10:26:54,914] com.ibm.itsm.topology.kafka.client.api.ConsumerWorker: Consumer topic [ea-asm-enriched-events] Read 1 records from Kafka
WARN [2020-02-05 10:26:55,255] org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-2, groupId=0] Connection to node 0 could not be established. Broker may not be available.
WARN [2020-02-05 10:26:55,255] org.apache.kafka.clients.NetworkClient: [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
WARN [2020-02-05 10:26:55,255] org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-5, groupId=0] Connection to node 0 could not be established. Broker may not be available.
WARN [2020-02-05 10:26:55,256] org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-3, groupId=0] Connection to node 0 could not be established. Broker may not be available.
WARN [2020-02-05 10:26:58,327] org.apache.kafka.clients.NetworkClient: [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
WARN [2020-02-05 10:26:58,327] org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-3, groupId=0] Connection to node 0 could not be established. Broker may not be available.
WARN [2020-02-05 10:26:58,327] org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-5, groupId=0] Connection to node 0 could not be established. Broker may not be available.
WARN [2020-02-05 10:26:58,327] org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-2, groupId=0] Connection to node 0 could not be established. Broker may not be available.

To solve the issue, delete the pods that are not working.

Monitoring events are not stored in the historical database
This is a known issue. As a workaround, restart the scala pod.

Topology icons not consistently appearing in the event viewer for topology management originating events
When you use the topology management REST Observer to publish topology with status and inspect the resultant events in the Cloud Native Analytics event viewer, all events associated with topology entities are expected to have a topology icon, to facilitate launching in context to the topology view. However, a high percentage of the events do not have a topology icon when they should.

Filter at temporal group does not work for clear events
In a Temporal Group containing events with a variety of severities, when you click the filter button beside the "Example_IBM_CloudAnalytics" view and then click "Clear", nothing happens. However, if you click other severities, for example "severity Major", it works as expected.

Internet Explorer on hybrid deployment: Not able to open CNEA related windows - "Content was blocked because it was not signed by a valid security certificate"
As a workaround, Internet Explorer users can follow these steps:
1. In Internet Explorer, go to Internet options.
2. On the Advanced tab, clear 'Warn about certificate address mismatch'.
3. Log in to DASH.
4. In the pop-up window, click 'Cloud Analytics'. A new window opens. Select the 'More Information' drop-down and select 'Go on to the Webpage'. The certificate error is shown in the browser URL bar.
5. In the address/URL bar of the main browser window, click 'Certificate error' > View Certificate.
6. Go to the Certificate Path tab, select 'ingress-operator@', and click View Certificate.
7. Click Install Certificate, select 'Place all certificates in the following store', click Browse, select 'Trusted Root Certificate Authorities', then click OK > Next > Finish.
8. A new window opens asking 'Do you want to install this certificate?'. Select Yes.

Information message about untrusted content after login
After logging in to Operations Management, an information message is displayed:

This console displays content from the below servers that appear to be using untrusted certificates.

Content from untrusted servers cannot be viewed in the console.

Please click each link below to open a new window where you may determine if you would like to accept the certificate and allow the content to be displayed.

When you are done, you may close the new windows and dismiss this warning.

This is a known issue. The pop-up window can be closed without following the links.

Replaying historic events on hybrid installation causes an error
On hybrid installations, the replaying of historic events to an SSL-enabled ObjectServer is not supported. Attempts to do so result in a write: broken pipe error.

Multiple icons displayed in Event Viewer
The Event Viewer displays multiple icons for the same correlation type against each event.

Countdown timer on Incident Viewer is not reset when refresh is clicked
When refresh is clicked, the countdown timer is not reset to 0 as it should be. The view is still correctly refreshed.

Policy timeouts with error
When you select a policy from Insights > Manage Policies, the policy does not load and displays the following error:

An error occurred while fetching data from the server. Please make sure you have an active internet connection. More...

You might sometimes hit this issue if you open a policy that has a large number of event instances.

Analytics services generating large log files
Disks can fill up very fast with log files for each service instance. As a workaround, increase the disk space available to the Docker containers (the /var/lib/docker directory).
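As a quick way to gauge how much space the container log files are consuming before resizing, you can use standard Linux commands (generic commands, not specific to Netcool Operations Insight):

df -h /var/lib/docker
du -sh /var/lib/docker/containers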

Blank window displayed when selecting Event Viewer from side bar of new tab in Internet Explorer V11
When you open the Incident Viewer and select a new tab, the new tab opens displaying the Incident view and a side bar with Event Viewer and Policies. If you select Event Viewer from the side bar, a blank window is displayed.

Default groups are missing from Tools Configuration > Access Criteria
When you log in to IBM Netcool/OMNIbus Web GUI and go to Tools Configuration > Access Criteria, the default groups, such as icpadmins, are missing under the "Access Criteria" section.

Authentication failure - cannot log in to Web GUI when external LDAP is configured
You cannot log in to Web GUI by using LDAP credentials. Configuring an external LDAP server is not supported in this release.

Authentication failure - ncobackup cannot authenticate user "root" when ncoprimary is restarted
The following errors are listed in the ncobackup pod logs during startup:

Server 'AGG_B' initialised - entering RUN state.
Updating default root password
2019-06-13T22:11:43: Error: E-OBX-102-023: Failed to authenticate user root. (-3600:Denied)
2019-06-13T22:11:43: Error: E-OBX-102-057: User root@-ncobackup-0 failed to login: Denied
Failed to connect
Error: Failed to get login token
Unable to update root password

The following error is seen in the objectserver logfile:

cat /opt/IBM/tivoli/netcool/omnibus/log/AGG_B_audit_file.log
2019-06-13T22:11:43: Error: E-SEC-010-002: authentication failure - cannot authenticate user "root" : Denied

This error can be ignored. The ncobackup pod tries to change the default password and fails, because the password has already been changed.

UI throws "Pod Security Conflict" warning when selecting namespace during a deployment of Operations Management on Red Hat OpenShift
The following warnings are displayed:

Pod Security Conflict
This chart requires a namespace with a ibm-priviliged-psp pod security policy

Pod Security Warning
Your ICP cluster is running all namespaces Unrestricted (ibm-anyuid-hostpath-psp) by default. This could pose a security risk

Red Hat OpenShift does not use pod security policies (PSPs); it uses security context constraints (SCCs) instead. This warning can be ignored if you have correctly configured a security context constraint, as described in Configuring pod access control.

loadSampleData.sh script fails
The loadSampleData.sh script fails to run as a result of the aggregation dedup service pod timing out its Redis connection. After a while, the service stops connecting if Redis is down. To work around this issue, restart the stateless aggregation dedup pods. Run the kubectl get pod | grep dedup command to get a list of dedup pods, then run the kubectl delete pod command for each dedup pod.

Auto-deploy mode is enabled after running the loadSampleData.sh script
If you have installed Operations Management in manual deploy mode, running the loadSampleData.sh script switches your configuration to auto-deploy mode. To disable auto-deploy mode, re-run a portion of the training by running the runTraining.sh script. Get the start and end times of the sample data as in the following example:

kubectl delete pod ingesnoi3; kubectl run ingesnoi3 -i --restart=Never --env=LICENSE=accept --image=/ea/ea-events-tooling: getTimeRange.sh samples/demoTrainingData.json.gz

pod "ingesnoi3" deleted
{"minfirstoccurence":{"epoc":1552023064603,"formatted":"2019-03-08T05:31:04.603Z"},"maxlastoccurrence":{"epoc":1559729860924,"formatted":"2019-06-05T10:17:40.924Z"}}

Re-run the training with seasonality disabled, as in the following example:

kubectl delete pod ingesnoi3; kubectl run ingesnoi3 -i --restart=Never --env=LICENSE=accept --image-pull-policy=Always --image=/ea/ea-events-tooling: runTraining.sh -- -r test-install -a SEASONALITY -s 1552023064603 -e 1559729860924 -d false

Where the value omitted from --image is the repository name, -s and -e are the start and end times returned by the getTimeRange.sh command, and -d false disables live policy updates.

Authentication errors reported in logs when ncoprimary and ncobackup pods are restarted
The following errors can be seen in the ncoprimary and ncobackup pod logs when the pods start up:

Information: I-SEC-104-002: Cannot authenticate user "root": Denied
Error: E-OBX-102-023: Failed to authenticate user root. (-3600:Denied)
Error: E-OBX-102-057: User root@dev401-ncoprimary-0 failed to login: Denied
Failed to connect
Error: Failed to get login token
Unable to update root password

These errors are caused by the ObjectServer starting up and attempting to change the default root password, which has already been changed.

Unable to make TLS connections using administrator GUI versions earlier than V8.1.0.18
Versions of the administrator GUI nco-config earlier than V8.1.0.18 fail to make a TLS connection. This is a known issue.

If you want to use the secrets that are automatically created by Operations Management, you must disable FIPS mode
If the certificate is automatically generated and FIPS mode is specified with the $NCHOME/etc/security/fips.conf file, the Netcool/OMNIbus client fails to make a connection, and a CT-LIBRARY error message is displayed. If you want to use the secrets that are automatically created, you must disable FIPS mode. Do this by removing the $NCHOME/etc/security/fips.conf file.

Additional Insight Pack installation does not persist
After you install an additional Insight Pack with Operations Analytics - Log Analysis and restart your pod, the Insight Pack disappears. This is a known issue. As a workaround, reinstall the Insight Pack.

Operations Analytics - Log Analysis dashboard does not persist
After you create an Operations Analytics - Log Analysis dashboard with a search result widget and restart your pod, the dashboard disappears. This is a known issue. As a workaround, recreate the dashboard.

Outage of Event Analytics pages or features
Following an Impact Server failover, Event Analytics pages or features might not be available for a short time. This is a known issue.

New method to access the status information of Netcool/Impact Name Server instances
The method to access the status information of Netcool/Impact Name Server instances has changed from the method documented in the Netcool/Impact documentation at https://www.ibm.com/support/knowledgecenter/SSSHYH_7.1.0.18/com.ibm.netcoolimpact.doc/admin/imag_gui_server_viewing_nameserver_status_c.html. If you are running Netcool/Impact under Operations Management v1.5.0.1, use the following method to access this status information:

1. Run the following command:

helm status release-name --tls

Where release-name is the release name for your deployment. This command generates output to your console. 2. In the output navigate to the Impact Servers section, which looks similar to the following snippet:

Impact Servers:
Update your hosts file (on the machine where you run your browser) or your DNS settings with this mapping:
$NODE_IP nci-0.release-name.master-node
$NODE_IP nci-1.release-name.master-node

firefox https://nci-0.release-name.master-node/nameserver/services
firefox https://nci-1.release-name.master-node/nameserver/services

3. Copy and paste the URLs into your browser to retrieve the Netcool/Impact name server instance status.

Device Dashboard

Device Dashboard is unable to differentiate between anomaly thresholds configured for poll definitions that have the same metric name
The Device Dashboard is unable to differentiate between anomaly thresholds configured for poll definitions that have the same metric name. To ensure that metrics are correctly distinguished in the Device Dashboard Performance Insights portlet, ensure that any poll definitions created have a unique metric name. However, this workaround does not apply to the Default Chassis Ping and Default Interface Ping poll definitions, which by default both use a metric with the name PingTime. The consequences of this are best illustrated with an example:
1. Assume that you set and enable an anomaly threshold on the Default Chassis Ping poll definition, with the intention of viewing anomalies against the chassis ping time metric.
2. Assume also that you do not set an anomaly threshold on the Default Interface Ping poll definition, because you do not want to view anomalies against the interface ping time metric.
3. However, the threshold that you set and enabled on the Default Chassis Ping poll definition applies to both the chassis and the interface ping time metrics. Consequently, in the Interfaces tab of the Performance Insights portlet, selecting PingTime from the Metric drop-down list returns unexpected anomalies in the portlet.
The workaround is to set and enable the same thresholds on both the Default Chassis Ping and Default Interface Ping poll definitions.

Tooltip on sparkline in Performance Insights portlet displays unnecessary scroll bar
The sparklines in the Performance Insights portlet each have an associated tooltip. Some of these tooltips display an unnecessary vertical scroll bar. This issue can safely be ignored.

Performance Insights portlet: sorting the data by Value does not work
In the Interfaces tab within the Performance Insights portlet, sorting the data by clicking the Value column does not work.

When launching a new Device Dashboard by right-clicking an event in the Event Viewer within an existing Device Dashboard the Device Dashboard does not refresh with the new entity identifier
When you launch a new Device Dashboard by right-clicking an event in the Event Viewer within an existing Device Dashboard, the newly launched Device Dashboard does not refresh with the entity identifier corresponding to the selected event. To work around this issue, select Event Viewer from the Dashboard Application Services Hub navigation, select the relevant event, and launch the Device Dashboard from there.

Integration problem: Widgets do not open when selecting "Show Traffic"
When right-clicking and selecting Show Traffic from any network topology (for example, Network Hop View), widgets do not open. To work around this issue, find the device in the Event Viewer, right-click, and select Show Traffic.

Device Dashboard does not launch from a Network Hop View opened within a new browser tab
If you launch a Network Hop View within a new browser tab by running Find in > Network Hop View from any other GUI, you will not be able to launch the Device Dashboard from that Network Hop View.

Performance Insights portlet filter values are cleared on refresh of Device Dashboard
You can apply filters in the Device Dashboard Performance Insights portlet, in both the Device and Interfaces tabs. However, when the Device Dashboard is refreshed, any values entered into the Interfaces tab filter field are automatically cleared. You need to enter the filter values again after the refresh.
Note: This issue does not affect values entered into the Device tab filter field.

Performance Insights portlet filter values are cleared when switching tabs
When you switch from the Interfaces to the Device tab in the Device Dashboard Performance Insights portlet, any values entered into the Interfaces tab filter field are automatically cleared. You need to enter the filter values again.
Note: This issue does not apply when you switch from the Device to the Interfaces tab.

When portlet preferences are saved Device Dashboard portlets are not displayed properly
After changing portlet preferences in the following Device Dashboard portlets, unexpected results occur:
• Performance Insights portlet: metric and count data disappear from the portlet, and the preferences do not take effect.
• Performance Timeline portlet: timeline data disappears; however, the portlet preferences are saved and are automatically applied when the graph is next rendered, either automatically on the next refresh or by clicking a sparkline.
To resolve this issue, do the following:
• Performance Insights portlet: click a device in the Topology portlet. This forces a manual refresh of the Performance Insights portlet.
• Performance Timeline portlet: click any sparkline in the Performance Insights portlet. This reinstates the Performance Timeline portlet and refreshes its content.

Occasionally the content of the Device Dashboard Performance Insights portlet does not update when you select a different interface metric
When you select a new interface metric from the Metric drop-down list in the Performance Insights portlet Interfaces tab, the content of the portlet occasionally does not update. To work around this issue, select the metric twice from the drop-down list.

Event Analytics

If your problem is not listed in this section, refer to Troubleshooting Event Analytics for additional issues.

In the View Related Events portlet the Related Event Groups panel right-click menu commands momentarily appear in capital letters
If you perform the following sequence of steps in the View Related Events portlet, the commands in the Related Event Groups panel right-click menu momentarily appear in full uppercase:
1. Run a configuration.
2. Navigate to the View Related Events portlet.
3. On the Related Event Groups panel, right-click a group and select to either deploy, watch, or archive the group.

4. As soon as Deploy, Watch, or Archive is selected and before the group has moved, right-click the related events group again. The right-click menu appears with capital letters.

During installation of Event Analytics if the Derby database becomes corrupted reset it to default state
During the installation of Event Analytics, if the Derby database becomes corrupted, you must reset it to the default state by performing the following steps:
1. Navigate to $IMPACT_HOME/install/dbcore and find the zip archive named ImpactDB_NOI_FP15.zip.
2. Stop all of the Netcool/Impact servers in the cluster, including the primary and secondary servers running the Derby database.
3. Back up the existing directory structure in $IMPACT_HOME/db/Server_Name/derby, where Server_Name is the name of the Netcool/Impact server. Once the backup is complete, remove the directory structure.
4. Copy the ImpactDB_NOI_FP15.zip file found in step 1 to $IMPACT_HOME/db/Server_Name/derby and unzip it there.
5. Start the primary Netcool/Impact server and allow it enough time to fully initialize.
6. Start the secondary Netcool/Impact server and allow it to resynchronize from the primary server.

Ensure that the property impact.featureToStringAsSource.enabled is set to true
The property impact.featureToStringAsSource.enabled must be set to true in order for Event Analytics to work correctly. Set the following value in the Netcool/Impact properties file and run the nci_trigger command to activate the property, as described in the following topic: Netcool/Impact Knowledge Center: nci_trigger.

impact.featureToStringAsSource.enabled=true
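As a sketch of one way to apply the property with nci_trigger (the server name NCI, the credentials, and the file path are placeholders; refer to the nci_trigger topic referenced above for the authoritative syntax):

$IMPACT_HOME/bin/nci_trigger NCI impactadmin/password NOI_DefaultValues_Export FILENAME /tmp/noi_values.props
# edit /tmp/noi_values.props and set impact.featureToStringAsSource.enabled=true
$IMPACT_HOME/bin/nci_trigger NCI impactadmin/password NOI_DefaultValues_Configure FILENAME /tmp/noi_values.props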

Seasonal event not matching original selection when opting to trigger rule off events related to the seasonal event
When you create a rule based on a seasonal event, if that seasonal event has associated related events, you have the option to trigger the rule based on one or more of those related events, by selecting the Edit Selection... button in the Create Rule screen. When you select this option, the originally selected seasonal event should be in the list and should be selected. Occasionally you might find that the event selected is not the originally selected seasonal event. In this case, search for the desired seasonal event and reselect it.

Unable to configure an event pattern in the Event Patterns GUI due to empty Trigger Action section
This occurs if, during the configuration of event pattern processing that is performed as part of Event Analytics system setup, the historical event database column that was chosen for the default event type does not contain any data.

View Related Events > Groups: right-click menu items displayed in capital letters
If you Deploy, Watch, or Archive a related event group from the right-click menu in the View Related Events > Groups panel and then immediately right-click again to access the menu before the group has moved, the menu appears with capital letters. There is no impact on moving groups in this manner.

Parts of the Events Pattern screen not displaying because an invalid group is selected
If you attempt to create an event pattern with groups that have EventID as null, the check boxes for Trigger Action, Event Type, and Resource Columns do not appear on the Pattern Criteria tab. An error message is displayed if you try to save the pattern. To avoid this issue, ensure that only valid groups are used to create patterns.

Chart data retrieved from the server is incorrectly formatted
Problems can occur when displaying Seasonal Event Graphs if you are experiencing network latency or a slow network connection between your browser and the DASH/Impact machine. In such a scenario, an incomplete graph page might be displayed with the following error message:

Chart data retrieved from the server is incorrectly formatted TypeError: Cannot read property 'grammar' of null

Event Analytics configurations are deleted on Netcool/Impact cluster node failover
Following a Netcool/Impact cluster node failover, the running configuration is correctly stopped, but any queued configurations are lost on the secondary node. These configurations cannot be retrieved and must be recreated on the secondary node. For more information, see the following technote: http://www.ibm.com/support/docview.wss?uid=swg22012656.

Suggested pattern with a blank Event Type parameter field
When editing a suggested pattern, the Event Type parameter field is empty if the property type.0.eventtype is set to a value that is empty in the database. To avoid this issue, ensure that the type.0.eventtype property is not set to an empty value in the Event History Database. Selecting an event type field that contains all null values in the history database results in the pattern criteria section of the create or edit pattern screen appearing blank.

Blank fields during creation of synthetic event on non-occurrence
Users are given the option to supply one or more values for additional columns by selecting Set additional fields. With the exception of the values specified in the Create Event window, only the values of Node and Summary are copied into the synthetic event.

Event summary is truncated
The event summary is occasionally truncated in the Related Events Details portlet Timeline view. To view the event summary, temporarily modify the screen resolution.

Seasonal event rule with time condition does not run
A seasonal event rule with an action to run after a specific time that is more than 25 seconds is not run. To ensure that a seasonal event rule that is scheduled to run after a specific time runs correctly, select a time condition of less than or equal to 25 seconds.

Edit selection window hangs when 'selecting all' for a large number of related events during seasonal event rule creation
When creating a seasonal event rule for an event with a large number of related events, you can check the Select all related events check box to associate all the related events with the seasonal event rule. The problem occurs when a large number of related events, on the order of 1000 or more, are selected and Edit selection is clicked. The Edit selection window is displayed, but it remains in a loading state. To avoid this issue, split the report into smaller reports and create rules around reports with fewer related events.

Incorrectly displaying 'No event selected' error message
When creating a pattern for related events and clicking Use Selected Event as Template, if you have not selected an event, the system correctly displays the 'No event selected' error message. However, if you then do select an event and click Use Selected Event as Template again, the error message persists. In this case, you can disregard and close the error message. It does not affect the creation of the pattern.

When testing an event pattern collapsed sections do not display correctly when reopened
Following testing of an event pattern, if you collapse the Groups, Group Instances, and Events sections in the Test tab of the Event Patterns GUI by using the splitters provided and then expand the sections again, the data columns in the Groups and Events panels become very narrow and the data cannot be read. To work around this issue, you can do one of the following:
• Resize the window frame. This causes the columns to resize so that the data becomes visible.
• Manually resize the column widths or refresh the page. Either of these actions causes the columns to be displayed correctly again.

Event Search

Non-alphanumeric characters do not work in Event Viewer search
If non-alphanumeric characters are used in an event search from the Event Viewer, these characters are replaced with their ASCII equivalent. This causes the search to be performed with the incorrect characters, and events that should have matched the search term are not returned in the search results. To resolve this, edit the search field and replace the string that has ASCII characters in it with the original search term that you entered.

Severity colors in Operations Analytics - Log Analysis charts use random colors for event severity values
Note: This issue is fixed in version 1.4.1.2.
Operations Analytics - Log Analysis charts, such as Event Trend by Severity, Event Storm by AlertGroup, and Severity Distribution, use random colors for chart elements that represent event severity values, and do not use the standard event severity colors used in the Web GUI Event Viewer. For example, a pie chart in the Severity Distribution chart portlet might contain a blue pie chart segment representing events of critical severity, even though the standard color for critical severity in the Web GUI Event Viewer is red. Furthermore, if two or more charts are presented on a dashboard, it is likely that the color used for a given event severity in one chart portlet on that dashboard page will differ from the same event severity in another chart portlet on that same dashboard page.
Note: Each chart has its own severity color legend. Refer to the severity color legend on each chart to determine which colors are used on that chart to denote different severity values.
This use of random colors that differ from the standard event severity colors used in the Web GUI Event Viewer is intentional, and is meant to enable you to distinguish between event severity values on a chart.

HeatMap chart types do not render correctly
Note: This issue is fixed in version 1.4.1.2.
The HeatMap type charts fail to render when using IBM Operations Analytics - Log Analysis V1.3.3, displaying the following error: Parameter category is invalid for Heat Map. The error is caused by a difference in the way charts are defined in Operations Analytics - Log Analysis 1.3.2 and 1.3.3. The following charts are affected:
• In OMNIbusInsightPack > Last Day > Dynamic Event Dashboard:
– Hotspots by Node and Alert Group
– Hotspots by AlertGroup and Severity
• In OMNIbus_Static_Dashboard, opened from the Web GUI EventSearch tab by right-clicking and selecting Show Event Dashboard by Node:
– Hotspots by Node and AlertGroup
– Hotspots by AlertGroup and Severity
To correct this problem, edit the following chart specification files, and search for all instances of category and change them to categories:
• $UNITY_HOME/AppFramework/Apps/OMNIbusInsightPack_v1.3.0.2/Last_Day/OMNIbus_Dynamic_Dashboard.app
• $UNITY_HOME/AppFramework/Apps/OMNIbusInsightPack_V1.3.0.2/OMNIbus_Static_Dashboard.app
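As a sketch, the substitution can be made with sed after backing up each file (the word-boundary pattern is an assumption about how category appears in the chart specification files; verify the result before reloading the dashboards):

for f in $UNITY_HOME/AppFramework/Apps/OMNIbusInsightPack_v1.3.0.2/Last_Day/OMNIbus_Dynamic_Dashboard.app \
         $UNITY_HOME/AppFramework/Apps/OMNIbusInsightPack_V1.3.0.2/OMNIbus_Static_Dashboard.app; do
  cp "$f" "$f.bak"                          # keep a backup copy
  sed -i 's/\bcategory\b/categories/g' "$f" # category -> categories
done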

Right-click keyword search results hidden by default
This issue affects Event Search prior to Operations Analytics - Log Analysis version 1.3.5. When using the Event Search right-click menu within Web GUI and selecting Show keywords and event count, the search results in Event Search are empty. The problem is caused by the Search Patterns pane being hidden by default in the Operations Analytics - Log Analysis user interface. To view the Show keywords and event count results, click the Restore Section triangle icon on the left side of the Event Search user interface.

Hotspots by Node, AlertKey not displaying
If the Hotspots by Node, Alert Group and AlertKey chart fails to display in the Last_Month > Operational Status Dashboard, the SOLR heap size might need increasing.
Note: The Hotspots by Node, Alert Group and AlertKey chart is CPU intensive and can be slow to render for systems with a large amount of data.
More information: https://www.ibm.com/support/knowledgecenter/SSPFMY_1.3.5/com.ibm.scala.doc/tshoot/iwa_solr_heap_issue_c.html and the IBM SmartCloud® Analytics - Log Analysis Performance and Tuning Guide.

Hover values for the Event Trend by Severity charts do not appear to match the axes
When hovering over a point on a particular severity, the values returned might not appear to match the axes on the chart. This is because the hover values represent that severity only, whereas the values on the axes are cumulative. For example, if there are 20 Intermediate severity events and 26 Major severity events displayed on the line above, the Major events appear against 46 on the Y-axis.

Drill down does not return results
The drill down function is not available for the type OMNIbus Tree Map. This affects some of the charts in the Operational Efficiency dashboard in the Last Month folder, and in the OMNIbus Operations Manager and OMNIbus Spot Important Alerts dashboards in the Last Hour folder. If drill down is required for these charts, you can use the default Tree Map specification instead. To change the specification, click the chart settings icon on the right of the chart and change Chart Type from OMNIbus Tree Map to Tree Map.

Help text not rendered correctly for Event Analysis and Reduction in bi-directional locales
This affects the help text in the Event Analysis and Reduction dashboards for Analyze and Reduce Event Volumes, Introduction to the Apps, and Helpful Links. The help text is not rendered correctly in the Arabic and Hebrew locales. You can view the text directly as an HTML file in a browser that supports bi-directional locales. The relevant files are Analyze_and_reduce_event_volumes.html, Introduction_to_the_Apps.html, and Helpful_links.html. The files are located in $UNITY_HOME/AppFramework/Apps/OMNIbusInsightPack_v1.3.0.2/locale/ /LC_MESSAGES.

Globalization

The following issues affect the non-English language versions of Netcool Operations Insight.

Event Search: count and time stamp hover help are not rendered into a non-English language in some charts
This issue affects the OMNIbusInsightPack_v1.3.1.
The following texts are not rendered into a non-English language and appear in English only:
• Hover help for time stamp and count in stacked bar charts, heat maps, and pie charts
• Legend for the Count field in the "Hotspots by Node and AlertGroup" chart and "Hotspots by AlertGroup and Severity" chart of the OMNIbus Static Dashboard custom app

• Legend for the Count field in the "Last Day - Hotspots by AlertGroup and Severity" chart, "Last Day - Event Storm by Node" chart, and "Last Day - Hotspots by Node and AlertGroup" chart of the OMNIbus Dynamic Dashboard custom app

Network Health Dashboard: In the Top Performers widget, the Japanese rendering of the time to refresh is not displayed correctly
In the Top Performers widget, the Japanese rendering of the time to refresh is not displayed correctly. The correct word order in Japanese has the number preceding the text; however, the word order displayed has the number following the text.

Topology Search: error message only partially rendered into non-English languages
This issue affects the Network Manager Insight Pack V1.3.0.0. If you attempt to run a topology search between two nodes on the same device, the error message that is displayed is only partially rendered into non-English languages. The error message in full is as follows: An error occurred calculating the path between 'X' and 'X', The source and destination Node's cannot be the same (where X is the value of the NmosObjInst for the device). The first half of the message, An error occurred calculating the path between 'X' and 'X', is rendered into non-English languages. The second half of the message, The source and destination Node's cannot be the same, is not rendered and always appears in English.

IBM Installation Manager Console mode: cannot install Netcool/OMNIbus core and Web GUI or Netcool/Impact at the same time
When you install Netcool/OMNIbus core and Web GUI or Netcool/Impact at the same time with Installation Manager console mode, the installation paths for Web GUI and Netcool/Impact are not prompted for, and the installation fails. If you are performing the installation with Installation Manager console mode, you must install the components separately.
Note: Installing Jazz for Service Management and IBM WebSphere Application Server is not supported in Installation Manager console mode.

Network Health Dashboard

The known problems vary depending on which version of Network Manager is installed.

If you have Network Manager 4.2.0.4 installed
There are no known problems for the Network Health Dashboard.

If you have Network Manager 4.2.0.3 installed
The Network Health Dashboard has the following known problems.

Must install Network Health Dashboard with the Network Manager GUI Components
For Fix Packs 1, 2, and 3, if you want to install the Network Health Dashboard, you must install it in the same IBM Installation Manager session as the Network Manager GUI Components. Otherwise, some interaction between widgets and pages might not work. If you upgrade the Network Manager GUI Components from Fix Pack 1 to 2, or 2 to 3, you must upgrade the Network Health Dashboard in the same IBM Installation Manager session. If you roll back the Network Manager GUI Components from Fix Pack 2 to Fix Pack 1, you must roll back the Network Health Dashboard in the same IBM Installation Manager session. If you roll back the Network Manager GUI Components from Fix Pack 3 to Fix Pack 2, you do not need to roll back the Network Health Dashboard in the same session. If you encounter problems with upgrading or rolling back the Network Manager GUI Components, apply the following workaround: uninstall the Network Health Dashboard, retry the GUI upgrade or rollback, then reinstall the Network Health Dashboard.

Network Health Dashboard does not work after rolling back from Fix Pack 3
If you update from Network Manager 4.2 to 4.2 Fix Pack 3 and then roll back to 4.2, the Network Health Dashboard does not function. Take a backup of your system before upgrading, and restore the backup instead of rolling back the product.

You must install the Network Health Dashboard with the Network Manager GUI Components
If you want to install the Network Health Dashboard, you must install it in the same IBM Installation Manager session as the Network Manager GUI Components. Otherwise, some interaction between widgets and pages might not work.

You must roll back the Network Manager GUI Components and the Network Health Dashboard packages together
If you have installed the Network Manager 4.2 GA release and the Network Health Dashboard, and you then updated to 4.2 Fix Pack 1, and at this point you want to roll back one or more packages, you must roll back the Network Manager GUI Components and the Network Health Dashboard packages together. If you roll back one of these packages without the other, there is a risk that your system will become corrupted.

Operations Analytics - Log Analysis

Operations Analytics - Log Analysis installation fails
Operations Analytics - Log Analysis V1.3.x requires a minimum of 5 GB of disk space on all supported operating systems. If less than 5 GB is found, the installer hangs. No error message is displayed. If this problem occurs, terminate the installer.
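Before launching the installer, it can help to confirm that the target file system has at least 5 GB free, for example with the standard df command (the path shown is an assumption; check the file system that you actually install to):

df -h /opt/IBM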

Support

IBM Electronic Support offers a portfolio of online support tools and resources that provides comprehensive technical information to diagnose and resolve problems and maintain your IBM products. IBM has many smart online tools and proactive features that can help you prevent problems from occurring in the first place, or quickly and easily troubleshoot problems when they occur. For more information, see: https://www.ibm.com/support/home/

Related concepts
Solution overview
Read about key concepts and capabilities of IBM Netcool Operations Insight.

Appendix B. Notices

This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY 10504-1785 U.S.A. For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to: Intellectual Property Licensing Legal and Intellectual Property Law IBM Japan, Ltd. 1623-14, Shimotsuruma, Yamato-shi Kanagawa 242-8502 Japan The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement might not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact: IBM Corporation 2Z4A/101 11400 Burnet Road Austin, TX 78758 U.S.A.

Such information may be available, subject to appropriate terms and conditions, including in some cases payment of a fee. The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurement may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. If you are viewing this information in softcopy form, the photographs and color illustrations might not be displayed.

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at “Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml. Adobe, Acrobat, PostScript and all Adobe-based trademarks are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, other countries, or both.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Other product and service names might be trademarks of IBM or other companies.




Printed in the Republic of Ireland (1P) P/N: SC27-8601-13