Ultra M Solutions Guide, Release 5.1.x
First Published: May 31, 2017
Last Updated: August 13, 2018
Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
Cisco Systems, Inc. www.cisco.com
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.
THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.
NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.
All printed copies and duplicate soft copies are considered uncontrolled copies; refer to the original online version for the latest version.
Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco website at www.cisco.com/go/offices.
© 2018 Cisco Systems, Inc. All rights reserved.
Table of Contents

Ultra M Solutions Guide, Release 5.1.x
    Conventions
    Obtaining Documentation and Submitting a Service Request
Ultra M Overview
    VNF Support
    Ultra M Models
    Functional Components
    Virtual Machine Allocations
Hardware Specifications
    Cisco Catalyst Switches
    Cisco Nexus Switches
    UCS C-Series Servers
Software Specifications
Networking Overview
    UCS-C240 Network Interfaces
    VIM Network Topology
    OpenStack Tenant Networking
    VNF Tenant Networks
    Layer 1 Leaf and Spine Topology
Deploying the Ultra M Solution
    Deployment Workflow
    Plan Your Deployment
    Install and Cable the Hardware
    Configure the Switches
    Prepare the UCS C-Series Hardware
    Deploy the Virtual Infrastructure Manager
    Configure SR-IOV
    Deploy the UAS, the VNFM, and the VNF
Event and Syslog Management Within the Ultra M Solution
    Syslog Proxy
    Event Aggregation
    Install the Ultra M Manager RPM
    Restarting the Ultra M Manager Service
    Uninstalling the Ultra M Manager
    Encrypting Passwords in the ultram_cfg.yaml File
    Configuring Fault Suppression
    Configuring Disk Monitoring
Appendix: Network Definitions (Layer 2 and 3)
Appendix: Non-Hyper-Converged Ultra M VIM Deployment Procedures
    Prerequisites
    Deploying the Undercloud
    Deploying the Overcloud
Appendix: Hyper-Converged Ultra M Model VIM Deployment Procedures
    Introduction
    Prerequisites
    Deploying the Undercloud
    Deploying the Overcloud
Appendix: Example ultram_cfg.yaml File
Appendix: Ultra M MIB
Appendix: Ultra M Component Event Severity and Fault Code Mappings
    OpenStack Events
    UCS Server Events
    UAS Events
Appendix: Ultra M Troubleshooting
    Ultra M Component Reference Documentation
    Collecting Support Information
    About Ultra M Manager Log Files
Appendix: Using the UCS Utilities Within the Ultra M Manager
    Perform Pre-Upgrade Preparation
    Shutdown the ESC VMs
    Upgrade the Compute Node Server Software
    Upgrade the OSD Compute Node Server Software
    Restarting the UAS and ESC (VNFM) VMs
    Upgrade the Controller Node Server Software
    Upgrading Firmware on UCS Bare Metal
    Upgrading Firmware on the OSP-D Server/Ultra M Manager Node
Appendix: ultram_ucs_utils.py Help
Conventions
This document uses the following conventions.
Convention Indication
bold font Commands and keywords and user-entered text appear in bold font.
italic font Document titles, new or emphasized terms, and arguments for which you supply values are in italic font.
[ ] Elements in square brackets are optional.
{x | y | z } Required alternative keywords are grouped in braces and separated by vertical bars.
[ x | y | z ] Optional alternative keywords are grouped in brackets and separated by vertical bars.
string A nonquoted set of characters. Do not use quotation marks around the string or the string will include the quotation marks.
courier font Terminal sessions and information the system displays appear in courier font.
< > Nonprinting characters such as passwords are in angle brackets.
[ ] Default responses to system prompts are in square brackets.
!, # An exclamation point (!) or a pound sign (#) at the beginning of a line of code indicates a comment line.
Note: Means reader take note. Notes contain helpful suggestions or references to material not covered in the manual.
Caution: Means reader be careful. In this situation, you might perform an action that could result in equipment damage or loss of data.
Warning: IMPORTANT SAFETY INSTRUCTIONS
Means danger. You are in a situation that could cause bodily injury. Before you work on any equipment, be aware of the hazards involved with electrical circuitry and be familiar with standard practices for preventing accidents. Use the statement number provided at the end of each warning to locate its translation in the translated safety warnings that accompanied this device.
SAVE THESE INSTRUCTIONS
Regulatory: Provided for additional information and to comply with regulatory and customer requirements.
Obtaining Documentation and Submitting a Service Request
For information on obtaining documentation, using the Cisco Bug Search Tool (BST), submitting a service request, and gathering additional information, see What's New in Cisco Product Documentation at: http://www.cisco.com/c/en/us/td/docs/general/whatsnew/whatsnew.html.
Subscribe to What's New in Cisco Product Documentation, which lists all new and revised Cisco technical documentation, as an RSS feed, and deliver content directly to your desktop using a reader application. The RSS feeds are a free service.
Ultra M Overview
Ultra M is a pre-packaged and validated virtualized mobile packet core solution designed to simplify the deployment of virtual network functions (VNFs).
The solution combines the Cisco Ultra Service Platform (USP) architecture, Cisco Validated OpenStack infrastructure, and Cisco networking and computing hardware platforms into a fully integrated and scalable stack. As such, Ultra M provides the tools to instantiate and provide basic lifecycle management for VNF components on a complete OpenStack virtual infrastructure manager.
VNF Support
In this release, Ultra M supports the Ultra Gateway Platform (UGP) VNF.
The UGP currently provides virtualized instances of the various 3G and 4G mobile packet core (MPC) gateways that enable mobile operators to offer enhanced mobile data services to their subscribers. The UGP addresses the scaling and redundancy limitations of VPC-SI (Single Instance) by extending the StarOS boundaries beyond a single VM. UGP allows multiple VMs to act as a single StarOS instance with shared interfaces, shared service addresses, load balancing, redundancy, and a single point of management.
Ultra M Models
Multiple Ultra M models are available:
■ Ultra M Small
■ Ultra M Medium
■ Ultra M Large
■ Ultra M Extra Small (XS)
These models are differentiated by their scale in terms of the number of active Service Functions (SFs) (as shown in Table 1), numbers of VNFs, and architecture.
Two architectures are supported: non-Hyper-Converged and Hyper-Converged. Table 2 identifies which architecture is supported for each Ultra M model.
Non-Hyper-Converged Ultra M models are based on OpenStack 9. This architecture implements separate Ceph Storage and Compute nodes.
Hyper-Converged models are based on OpenStack 10. The Hyper-Converged architecture combines the Ceph Storage and Compute node. The converged node is referred to as an OSD compute node.
Table 1 - Active Service Functions to Ultra M Model Comparison

Ultra M Model    # of Active Service Functions (SFs)
Ultra M XS       6
Ultra M Small    7
Ultra M Medium   10
Ultra M Large    14
Table 2 - Ultra M Model Supported VNF and Architecture Variances

             Architecture
             Non-Hyper-Converged   Hyper-Converged
Single VNF   Ultra M Small         Ultra M XS
             Ultra M Medium
             Ultra M Large
Multi-VNF*   NA                    Ultra M XS
* Multi-VNF provides support for multiple VNF configurations (up to 4 maximum). Contact your Sales or Support representative for additional information.
Functional Components
As described in Hardware Specifications, the Ultra M solution consists of multiple hardware components, including multiple servers that function as controller, compute, and storage nodes. The following functional components are deployed on this hardware:
■ OpenStack Controller: Serves as the Virtual Infrastructure Manager (VIM).
NOTE: In this release, all VNFs in a multi-
■ Ultra Automation Services (UAS): A suite of tools provided to simplify the deployment process:
— AutoIT-VNF: Provides storage and management for system ISOs.
— AutoDeploy: Initiates the deployment of the VNFM and VNF components through a single deployment script.
— AutoVNF: Initiated by AutoDeploy, AutoVNF is directly responsible for deploying the VNFM and VNF components based on inputs received from AutoDeploy.
— Ultra Web Service (UWS): The Ultra Web Service (UWS) provides a web-based graphical user interface (GUI) and a set of functional modules that enable users to manage and interact with the USP VNF.
■ Cisco Elastic Services Controller (ESC): Serves as the Virtual Network Function Manager (VNFM).
NOTE: ESC is the only VNFM supported in this release.
■ VNF Components: USP-based VNFs are comprised of multiple components providing different functions:
— Ultra Element Manager (UEM): Serves as the Element Management System (EMS, also known as the VNF-EM); it manages all of the major components of the USP-based VNF architecture.
— Control Function (CF): A central sub-system of the UGP VNF, the CF works with the UEM to perform lifecycle events and monitoring for the UGP VNF.
— Service Function (SF): Provides service context (user I/O ports), handles protocol signaling, session processing tasks, and flow control (demux).
Figure 1 - Ultra M Components
Virtual Machine Allocations
Each of the Ultra M functional components is deployed on one or more virtual machines (VMs) based on its redundancy requirements as identified in Table 3. Some of these component VMs are deployed on a single compute node as described in VM Deployment per Node Type. All deployment models use three OpenStack controllers to provide VIM layer redundancy and upgradability.
Table 3 - Function VM Requirements per Ultra M Model

              Non-Hyper-Converged       Hyper-Converged
Function(s)   Small   Medium   Large    XS Single VNF   XS Multi-VNF
AutoIT-VNF    1       1        1        1               1
AutoDeploy    1       1        1        1               1
AutoVNF       3       3        3        3               3 per VNF
ESC (VNFM)    2       2        2        2               2 per VNF
UEM           3       3        3        3               3 per VNF
CF            2       2        2        2               2 per VNF
SF            9       12       16       8               8 per VNF
VM Requirements
The CF, SF, UEM, and ESC VMs require the resource allocations identified in Table 4. The host resources are included in these numbers.
Table 4 - VM Resource Allocation

Virtual Machine   vCPU   RAM (GB)   Root Disk (GB)
AutoIT-VNF        4      8          80
AutoDeploy        4      8          80
AutoVNF           2      4          40
ESC               4      8          30
UEM               2      4          40
CF                8      16         6
SF                24     96         4
NOTE: 4 vCPUs, 2 GB of RAM, and 54 GB of root disk space on each server are reserved for the host.
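Tables 3 and 4 together determine the aggregate VM resources a deployment must supply. As an illustrative cross-check only (not an official sizing tool), the totals for the Ultra M Large model can be derived as follows:

```python
# Illustrative cross-check of aggregate VM resources for the Ultra M Large
# model, derived from Table 3 (VM counts) and Table 4 (per-VM allocations).
# This is a planning sketch only, not an official Cisco sizing tool.

# Per-VM allocations from Table 4: (vCPU, RAM in GB, root disk in GB)
ALLOCATIONS = {
    "AutoIT-VNF": (4, 8, 80),
    "AutoDeploy": (4, 8, 80),
    "AutoVNF":    (2, 4, 40),
    "ESC":        (4, 8, 30),
    "UEM":        (2, 4, 40),
    "CF":         (8, 16, 6),
    "SF":         (24, 96, 4),
}

# VM counts for the Ultra M Large model from Table 3.
LARGE_COUNTS = {
    "AutoIT-VNF": 1, "AutoDeploy": 1, "AutoVNF": 3,
    "ESC": 2, "UEM": 3, "CF": 2, "SF": 16,
}

def totals(counts):
    """Sum vCPU, RAM, and root disk across all VMs in a model."""
    vcpu = sum(ALLOCATIONS[vm][0] * n for vm, n in counts.items())
    ram  = sum(ALLOCATIONS[vm][1] * n for vm, n in counts.items())
    disk = sum(ALLOCATIONS[vm][2] * n for vm, n in counts.items())
    return vcpu, ram, disk

vcpu, ram, disk = totals(LARGE_COUNTS)
print(f"Ultra M Large: {vcpu} vCPUs, {ram} GB RAM, {disk} GB root disk")
```

Remember that the per-host reservations described in the NOTE above (4 vCPUs, 2 GB RAM, 54 GB disk per server) come out of the hardware capacity in addition to these VM totals.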
Hardware Specifications
Ultra M deployments use the following hardware:
■ Cisco Catalyst Switches
■ Cisco Nexus Switches
■ UCS C-Series Servers
NOTE: The specific component software and firmware versions identified in the sections that follow have been validated in this Ultra M solution release.
Cisco Catalyst Switches
Cisco Catalyst switches provide physical Layer 1 switching for Ultra M components on the management and provisioning networks. One of two switch models is used based on the Ultra M model being deployed:
■ Catalyst 2960XR-48TD-I Switch
■ Catalyst 3850-48T-S Switch
Catalyst 2960XR-48TD-I Switch
The Catalyst 2960XR-48TD-I has 48 10/100/1000 ports.
Table 5 - Catalyst 2960-XR Switch Information

Ultra M Models          Quantity     Software Version   Firmware Version
Ultra M Small           1            IOS 15.2(2)E5      Boot Loader: 15.2(3r)E1
Ultra M Medium          2            IOS 15.2(2)E5      Boot Loader: 15.2(3r)E1
Ultra M Large           2            IOS 15.2(2)E5      Boot Loader: 15.2(3r)E1
Ultra M XS Single VNF   2            IOS 15.2(2)E5      Boot Loader: 15.2(3r)E1
Ultra M XS Multi-VNF    1 per rack   IOS 15.2(2)E5      Boot Loader: 15.2(3r)E1
Catalyst 3850-48T-S Switch
The Catalyst 3850-48T-S has 48 10/100/1000 ports.
Table 6 - Catalyst 3850-48T-S Switch Information

Ultra M Models          Quantity     Software Version   Firmware Version
Ultra M XS Single VNF   2            IOS: 03.06.06E     Boot Loader: 3.58
Ultra M XS Multi-VNF    1 per rack   IOS: 03.06.06E     Boot Loader: 3.58
Cisco Nexus Switches
Cisco Nexus switches serve as top-of-rack (ToR) leaf and end-of-rack (EoR) spine switches, providing out-of-band (OOB) network connectivity between Ultra M components. Two switch models are used across the various Ultra M models:
■ Nexus 93180-YC-EX
■ Nexus 9236C
Nexus 93180-YC-EX
Nexus 93180 switches serve as network leafs within the Ultra M solution. Each switch has 48 10/25-Gbps Small Form-factor Pluggable Plus (SFP+) ports and 6 40/100-Gbps Quad SFP+ (QSFP+) uplink ports.
Table 7 - Nexus 93180-YC-EX Switch Information

Ultra M Models          Quantity     Software Version     Firmware Version
Ultra M Small           2            NX-OS: 7.0(3)I2(3)   BIOS: 7.41
Ultra M Medium          4            NX-OS: 7.0(3)I2(3)   BIOS: 7.41
Ultra M Large           4            NX-OS: 7.0(3)I2(3)   BIOS: 7.41
Ultra M XS Single VNF   2            NX-OS: 7.0(3)I5(2)   BIOS: 7.59
Ultra M XS Multi-VNF    2 per rack   NX-OS: 7.0(3)I5(2)   BIOS: 7.59
Nexus 9236C
Nexus 9236 switches serve as network spines within the Ultra M solution. Each switch provides 36 10/25/40/50/100 Gbps ports.
Table 8 - Nexus 9236C Switch Information

Ultra M Models          Quantity   Software Version     Firmware Version
Ultra M Small           0          NX-OS: 7.0(3)I4(2)   BIOS: 7.51
Ultra M Medium          2          NX-OS: 7.0(3)I4(2)   BIOS: 7.51
Ultra M Large           2          NX-OS: 7.0(3)I4(2)   BIOS: 7.51
Ultra M XS Single VNF   2          NX-OS: 7.0(3)I5(2)   BIOS: 7.59
Ultra M XS Multi-VNF    2          NX-OS: 7.0(3)I5(2)   BIOS: 7.59
UCS C-Series Servers
Cisco UCS C240 M4S SFF servers host the functions and virtual machines (VMs) required by Ultra M.
Server Functions and Quantities
Server functions and quantity differ depending on the Ultra M model you are deploying:
■ OpenStack Platform Director (OSP-D) / Staging Server Node: This server hosts the Undercloud function responsible for bringing up the other servers that form the Overcloud.
For non-Hyper-Converged Ultra M models, this server also provides the AutoIT-VNF functionality in addition to the Undercloud.
For Hyper-Converged Ultra M XS Single and Multi-VNF models, this server is also referred to as the Ultra M Manager Node.
■ OpenStack Controller Nodes: These servers host the high availability (HA) cluster that serves as the VIM within the Ultra M solution. In addition, they facilitate the Ceph storage monitor function required by the Ceph Storage Nodes and/or OSD Compute Nodes.
■ Ceph Storage Nodes: Required only for non-Hyper-Converged Ultra M models, these servers are deployed in a HA cluster with each server containing a Ceph Object Storage Daemon (OSD) providing storage capacity for the VNF and other Ultra M elements.
■ OSD Compute Nodes: Required only for Hyper-Converged Ultra M models, these servers provide Ceph storage functionality in addition to hosting VMs for the following:
— AutoIT-VNF VM
— AutoDeploy VM
— AutoVNF HA cluster VMs
— Elastic Services Controller (ESC) Virtual Network Function Manager (VNFM) active and standby VMs
— Ultra Element Manager (UEM) VM HA cluster
— Ultra Service Platform (USP) Control Function (CF) active and standby VMs
■ Compute Nodes: For all Ultra M models, these servers host the active, standby, and demux USP Service Function (SF) VMs. However, for non-Hyper-Converged Ultra M models, these servers also host the following VMs:
— AutoVNF HA cluster VMs
— Elastic Services Controller (ESC) Virtual Network Function Manager (VNFM) active and standby VMs
— Ultra Element Manager (UEM) VM HA cluster
— Ultra Service Platform (USP) Control Function (CF) active and standby VMs
Table 9 provides information on server quantity requirements per function for each Ultra M model.
Table 9 - Ultra M Server Quantities by Model and Function

Ultra M Model           Server     OSP-D/Staging   Controller   Ceph Storage   OSD Compute   Compute
                        Qty (max)  Server Node     Nodes        Nodes          Nodes         Nodes (max)
Ultra M Small           19         1               3            3              0             12
Ultra M Medium          22         1               3            3              0             15
Ultra M Large           26         1               3            3              0             19
Ultra M XS Single VNF   15         1               3            0              3             8
Ultra M XS Multi-VNF    45         1               3            0              3*            38**

Additional node specifications are based on node type as described in Table 10 (Small, Medium, Large) or Table 11 (XS Single and Multi-VNF).
* 3 for the first VNF, 2 per each additional VNF.
** Supports a maximum of 4 VNFs.
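The per-model quantities in Table 9 can be sanity-checked by summing the per-function node counts for each row; an illustrative sketch of that arithmetic:

```python
# Consistency check on Table 9: for each Ultra M model, the per-function
# node counts should sum to the listed total server quantity. All values
# are taken directly from Table 9; this is an illustrative check only.

# (OSP-D/Staging, Controller, Ceph Storage, OSD Compute, Compute, listed total)
TABLE_9 = {
    "Ultra M Small":         (1, 3, 3, 0, 12, 19),
    "Ultra M Medium":        (1, 3, 3, 0, 15, 22),
    "Ultra M Large":         (1, 3, 3, 0, 19, 26),
    "Ultra M XS Single VNF": (1, 3, 0, 3, 8, 15),
    "Ultra M XS Multi-VNF":  (1, 3, 0, 3, 38, 45),
}

for model, row in TABLE_9.items():
    *counts, listed_total = row
    # Each row's function counts must add up to the model's server quantity.
    assert sum(counts) == listed_total, f"{model}: counts do not sum"
    print(f"{model}: {listed_total} servers")
```

Note that the Multi-VNF row uses the maximum values from Table 9 (3 OSD Compute Nodes for the first VNF per the footnote, 38 Compute Nodes across up to 4 VNFs).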
VM Deployment per Node Type
Just as the server functions and quantities differ depending on the Ultra M model you are deploying, so does the VM distribution across those nodes as shown in Figure 2, Figure 3, and Figure 4.
Figure 2 VM Distribution on Server Nodes for Non-Hyper-Converged Ultra M Models
Figure 3 - VM Distribution on Server Nodes for Hyper-Converged Ultra M Single VNF Models
Figure 4 - VM Distribution on Server Nodes for Hyper-Converged Ultra M Multi-VNF Models
Server Configurations
Table 10 - Non-Hyper-Converged Ultra M Model UCS C240 Server Specifications by Node Type

All node types use 2x 2.60 GHz CPUs, MLOM software version 4.1(1g), CIMC firmware 2.0(10e), and System BIOS C240M4.2.0.10e.0.0620162114.

Node Type        RAM                                Storage
Staging Server   4x 32 GB DDR4-2400-MHz RDIMM/PC4   2x 1.2 TB 12G SAS HDD
Controller       4x 32 GB DDR4-2400-MHz RDIMM/PC4   2x 1.2 TB 12G SAS HDD
Compute          8x 32 GB DDR4-2400-MHz RDIMM/PC4   2x 1.2 TB 12G SAS HDD
Storage          4x 32 GB DDR4-2400-MHz RDIMM/PC4   2x 300 GB 12G SAS 10K RPM SFF HDD
Table 11 - Hyper-Converged Ultra M Single and Multi-VNF UCS C240 Server Specifications by Node Type

All node types use 2x 2.60 GHz CPUs, MLOM software version 4.1(3a), CIMC firmware* 3.0(3e), and System BIOS C240M4.3.0.3c.0.0831170228.

Node Type      RAM                                Storage
OSP-D Server   4x 32 GB DDR4-2400-MHz RDIMM/PC4   2x 1.2 TB 12G SAS HDD
Controller     4x 32 GB DDR4-2400-MHz RDIMM/PC4   2x 1.2 TB 12G SAS HDD
Compute        8x 32 GB DDR4-2400-MHz RDIMM/PC4   2x 1.2 TB 12G SAS HDD
OSD Compute    8x 32 GB DDR4-2400-MHz RDIMM/PC4   4x 1.2 TB 12G SAS HDD, 2x 300 GB 12G SAS HDD, 1x 480 GB 6G SAS SATA SSD
* The Ultra M Manager functionality includes utilities for upgrading the UCS server software (firmware). Refer to Appendix: Using the UCS Utilities Within the Ultra M Manager for more information.
Storage
Figure 5 displays the storage disk layout for the UCS C240 series servers used in the Ultra M solution.
Figure 5 UCS C240 Front-Plane
NOTES:
■ The Boot disks contain the operating system (OS) image with which to boot the server.
■ The Journal disks contain the Ceph journal file(s) used to repair any inconsistencies that may occur in the Object Storage Disks.
■ The Object Storage Disks store object data for USP-based VNFs.
■ Ensure that the HDDs and SSDs to be used for the Boot Disk, Journal Disk, and object storage devices (OSDs) are available per the Ultra M BoM and are installed in the appropriate slots as identified in Table 12.
Table 12 - UCS C240 M4S SFF Storage Specifications by Node Type

OSP-D Node and Staging Server:
  2x 1.2 TB HDD for the Boot OS, configured as a virtual drive in RAID1, placed in slots 1 and 2.

Controllers and Computes:
  2x 1.2 TB HDD for the Boot OS, configured as a virtual drive in RAID1, placed in slots 1 and 2.

Ceph Nodes and OSD Computes*:
  2x 300 GB HDD for the Boot OS, configured as a virtual drive in RAID1, placed in slots 1 and 2.
  1x 480 GB SSD for the Journal Disk, configured as a virtual drive in RAID0, placed in slot 3.
  (SSD slots 3, 4, 5, and 6 are reserved for future scaling needs.)
  4x 1.2 TB HDD for object storage, placed in slots 7, 8, 9, and 10.
* Ceph Nodes are used in the non-Hyper-Converged Ultra M models while OSD Computes are used in Hyper- Converged Ultra M models. Refer to Server Functions and Quantities for details.
■ Ensure that the RAIDs are sized such that:
Boot Disks < Journal Disk(s) < OSDs
■ Ensure that FlexFlash is disabled on each UCS C240 M4 (the factory default).
■ Ensure that all node drives are in the Unconfigured Good state under the Cisco SAS RAID Controller (the factory default).
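The "Boot Disks < Journal Disk(s) < OSDs" sizing rule above can be expressed concretely against the Ceph/OSD Compute drive capacities listed in Table 12 (a sketch for illustration only):

```python
# Illustrative check of the "Boot Disks < Journal Disk(s) < OSDs" sizing
# rule against the Ceph/OSD Compute drive capacities from Table 12.
boot_gb = 300     # 2x 300 GB HDD in RAID1, so usable capacity is one drive
journal_gb = 480  # 1x 480 GB SSD in RAID0
osd_gb = 1200     # each of the 4x 1.2 TB object storage HDDs

# The rule requires strictly increasing capacity across the three tiers.
assert boot_gb < journal_gb < osd_gb
print(f"RAID sizing rule satisfied: {boot_gb} GB < {journal_gb} GB < {osd_gb} GB")
```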
Software Specifications
Table 13 - Required Software

Software           Value/Description
Operating System   Red Hat Enterprise Linux 7.3
Hypervisor         QEMU (KVM)
VIM                Non-Hyper-Converged Ultra M models: Red Hat OpenStack Platform 9 (OSP 9, Mitaka)
                   Hyper-Converged Ultra M Single and Multi-VNF models: Red Hat OpenStack Platform 10 (OSP 10, Newton)
VNF                VPC 21.1.3
VNFM               ESC 2.3.2.155
UEM                UEM 5.1
USP                USP 5.1.3
Networking Overview
This section provides information on Ultra M networking requirements and considerations.
UCS-C240 Network Interfaces
Figure 6 - UCS-C240 Back-Plane
Table 14 - UCS C240 Interface Descriptions

1. CIMC/IPMI/Management: Management network interface used for accessing the UCS Cisco Integrated Management Controller (CIMC) application and performing Intelligent Platform Management Interface (IPMI) operations.
   Applicable node types: All.

2. Intel Onboard:
   Port 1: Undercloud Provisioning network interface. Applicable node types: All.
   Port 2: External network interface for Internet access; it must also be routable to external floating IP addresses on other nodes. Applicable node types: OSP-D/Staging Server.

3. Modular LAN on Motherboard (mLOM): Overcloud networking interfaces used for:
   ■ External floating IP network (Controller)
   ■ Internal API network (Controller)
   ■ Storage network (Controller, Compute, OSD Compute, Ceph)
   ■ Storage Management network (Controller, Compute, OSD Compute, Ceph)
   ■ Tenant network, virtio only: Overcloud provisioning, VNF Management, and VNF Orchestration (Controller, Compute, OSD Compute)

4. PCIe 4:
   Port 1: With NIC bonding enabled, this port provides the active Service network interfaces for VNF ingress and egress connections. Applicable node types: Compute.
   Port 2: With NIC bonding enabled, this port provides the standby Di-internal network interface for inter-VNF-component communication. Applicable node types: Compute, OSD Compute.

5. PCIe 1:
   Port 1: With NIC bonding enabled, this port provides the active Di-internal network interface for inter-VNF-component communication. Applicable node types: Compute, OSD Compute.
   Port 2: With NIC bonding enabled, this port provides the standby Service network interfaces for VNF ingress and egress connections. Applicable node types: Compute.
VIM Network Topology
-On-OpenStack") which allows OpenStack components to install a fully operational OpenStack environment. This Undercloud-Overcloud model requires multiple networks as identified in Figure 7 and Figure 8.
Figure 7 Non-Hyper-Converged Ultra M Small, Medium, Large Model OpenStack VIM Network Topology
Figure 8 Hyper-Converged Ultra M Single and Multi-VNF Model OpenStack VIM Network Topology
Some considerations for Undercloud and Overcloud deployment are as follows:
■ External network access (e.g. Internet access) can be configured in one of the following ways:
— Across all node types: A single subnet is configured for the Controller HA and VIP addresses, the floating IP addresses, and the OSP-D/Staging Server external interface. This network must be data-center routable and able to reach the Internet.
— Limited to OSP-D: The External IP network is used by the Controllers for HA and the Horizon dashboard, and later for tenant floating IP address requirements. This network must be data-center routable. In addition, the External IP network is used only by the OSP-D/Staging Server external interface, which has a single IP address. The External IP network must be lab/data-center routable and must also have Internet access to the Red Hat cloud. It is used by the OSP-D/Staging Server for subscription purposes and also acts as an external gateway for all Controller, Compute, and Ceph Storage nodes.
■ IPMI must be enabled on all nodes.
■ Two networks are needed to deploy Undercloud:
— IPMI/CIMC Network
— Provisioning Network
■ The OSP-D/Staging Server must have reachability to both IPMI/CIMC and Provisioning Networks. (Undercloud networks need to be routable between each other or have to be in one subnet.)
■ DHCP-based IP address assignment for Introspection PXE is provided from the Provisioning network (Range A).
■ DHCP-based IP address assignment for Overcloud PXE is provided from the Provisioning network (Range B); this range must be separate from the Introspection range.
■ The OSP-D/Staging Server acts as a gateway for the Controller, Ceph, and Compute nodes. Therefore, the external interface of the OSP-D/Staging Server must be able to access the Internet. In addition, this interface must be routable to the data-center network. This allows the external interface IP address of the OSP-D/Staging Server to reach the data-center routable floating IP addresses as well as the VIP addresses of the Controllers in HA mode.
■ Multiple VLANs are required in order to deploy OpenStack VIM:
— 1 for the Management and Provisioning networks interconnecting all of the nodes regardless of type
— 1 for the Staging Server/ OSP-D Node external network
— 1 for Compute, Controller, and Ceph Storage or OSD Compute Nodes
— 1 for Management network interconnecting the Leafs and Spines
■ Login to individual Compute nodes is done from the OSP-D/Staging Server using the heat user login credentials.
The OSP-D/Staging Server acts as a jump server where the br-ctlplane interface address is used to log in to the Controller, Ceph or OSD Compute, and Compute nodes after Overcloud deployment using the heat-admin credentials.
Layer 1 networking guidelines for the VIM network are provided in Layer 1 Leaf and Spine Topology. In addition, a template is provided in Appendix: Network Definitions (Layer 2 and 3) to assist you with your Layer 2 and Layer 3 network planning.
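The two non-overlapping DHCP ranges on the Provisioning network (Range A for Introspection PXE, Range B for Overcloud PXE) can be planned and checked with a short script. All addresses below are illustrative assumptions, not values mandated by this guide:

```python
# Sketch: verify that the Introspection DHCP range (Range A) and the
# Overcloud PXE DHCP range (Range B) both sit inside the Provisioning
# subnet and do not overlap. All addresses here are illustrative.
import ipaddress

provisioning = ipaddress.ip_network("192.0.2.0/24")   # assumed subnet
range_a = (ipaddress.ip_address("192.0.2.100"),       # Introspection PXE
           ipaddress.ip_address("192.0.2.120"))
range_b = (ipaddress.ip_address("192.0.2.150"),       # Overcloud PXE
           ipaddress.ip_address("192.0.2.200"))

# Both ranges must fall entirely within the Provisioning subnet.
for start, end in (range_a, range_b):
    assert start in provisioning and end in provisioning, "range outside subnet"

# The ranges must not overlap: one must end before the other begins.
assert range_a[1] < range_b[0] or range_b[1] < range_a[0], "ranges overlap"
print("Provisioning DHCP ranges are valid and non-overlapping")
```

In a TripleO-based deployment these ranges typically correspond to the `inspection_iprange` and `dhcp_start`/`dhcp_end` settings in undercloud.conf; consult the Red Hat OpenStack Platform director documentation for the authoritative parameter names.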
OpenStack Tenant Networking
The interfaces used by the VNF are based on the PCIe architecture. Single root input/output virtualization (SR-IOV) is used on these interfaces to allow multiple VMs on a single server node to use the same network interface, as shown in Figure 9. SR-IOV networks are configured as the Flat network type in OpenStack. NIC bonding is used to ensure port-level redundancy for the PCIe cards involved in SR-IOV tenant networks, as shown in Figure 10.
Figure 9 - Physical NIC to Bridge Mappings
Figure 10 NIC Bonding
VNF Tenant Networks
While specific VNF network requirements are described in the documentation corresponding to the VNF, Figure 11 displays the types of networks typically required by USP-based VNFs.
Figure 11 Typical USP-based VNF Networks
The USP-based VNF networking requirements and the specific roles are described here:
■ DI-Internal: This is the DI-internal network used for CF-SF and CF-CF communications. Since this network is internal to the UGP, it does not have a gateway interface to the router in the OpenStack network topology. A unique DI-internal network must be created for each instance of the UGP. The interfaces attached to these networks use performance optimizations.
■ Management: This is the local management network between the CFs and other management elements like the UEM and VNFM. This network is also used by the OSP-D Server Node to deploy the VNFM and AutoVNF. To allow external management access, an OpenStack floating IP address must be associated with the UGP VIP address. A common cf-mgmt network can be used for multiple instances of the UGP.
■ Orchestration: This is the network used for VNF deployment and monitoring. It is used by the VNFM to onboard the USP-based VNF.
■ Public: External public network. The router has an external gateway to the public network. All other networks (except DI-Internal and ServiceA-n) have an internal gateway pointing to the router, and the router performs source network address translation (SNAT).
■ ServiceA-n: These are the service interfaces to the SF. Up to 12 service interfaces can be provisioned for the SF with this release. The interfaces attached to these networks use performance optimizations.
Layer 1 networking guidelines for the VNF network are provided in Layer 1 Leaf and Spine Topology. In addition, a template is provided in Appendix: Network Definitions (Layer 2 and 3) to assist you with your Layer 2 and Layer 3 network planning.
Supporting Trunking on VNF Service Ports
Service ports within USP-based VNFs are configured as trunk ports, and traffic is tagged using the VLAN command. This configuration is supported by trunking to the uplink switch via the sriovnicswitch mechanism driver.
This driver supports Flat network types in OpenStack, enabling the guest OS to tag the packets.
Flat networks are untagged networks in OpenStack. Typically, these networks map onto previously existing infrastructure to which OpenStack guests can be directly attached.
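The flat-network and SR-IOV port setup described above can be sketched as the OpenStack CLI calls it implies. This is a minimal illustration only: the network and physical-network names are assumptions, not values mandated by the Ultra M solution.

```python
# Sketch: building the OpenStack CLI calls for an untagged ("flat") provider
# network whose VLAN tagging is left to the guest OS, as used by USP-based
# VNF service ports. Names like "service-a" and "physnet_service" are
# illustrative assumptions.

def flat_network_cmd(net_name: str, physnet: str) -> str:
    # Flat networks are untagged in OpenStack; the sriovnicswitch mechanism
    # driver lets the guest apply its own 802.1q tags on such networks.
    return (
        "openstack network create "
        "--provider-network-type flat "
        f"--provider-physical-network {physnet} {net_name}"
    )

def sriov_port_cmd(port_name: str, net_name: str) -> str:
    # vnic-type 'direct' requests an SR-IOV VF handled by sriovnicswitch.
    return (
        f"openstack port create --network {net_name} "
        f"--vnic-type direct {port_name}"
    )

if __name__ == "__main__":
    print(flat_network_cmd("service-a", "physnet_service"))
    print(sriov_port_cmd("sf-service-port-1", "service-a"))
```

The generated commands would be run against the Overcloud after deployment; consult your OpenStack release documentation for exact option support.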
Layer 1 Leaf and Spine Topology
Ultra M implements a Leaf and Spine network topology. Topology details differ between Ultra M models based on the scale and number of nodes.
NOTE: When connecting component network ports, ensure that the destination ports are rated at the same speed as the source port (e.g. connect a 10G port to a 10G port). Additionally, the source and destination ports must support the same physical medium (e.g. Ethernet) for interconnectivity.
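The cabling rule in the note above — both ends of every interconnect must match in port speed and physical medium — can be expressed as a small validation helper. The port speeds and names below are illustrative assumptions, not values taken from the tables.

```python
# Minimal sketch of the interconnect rule: both endpoints of a link must
# have the same rated speed and the same physical medium. Port data here
# is illustrative only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Port:
    device: str
    name: str
    speed_gbps: int
    medium: str  # e.g. "ethernet"

def link_ok(a: Port, b: Port) -> bool:
    # A 10G port must connect to a 10G port, over the same medium.
    return a.speed_gbps == b.speed_gbps and a.medium == b.medium

leaf = Port("Leaf 1", "Eth1/8", 10, "ethernet")
mlom = Port("Compute 1", "MLOM 1", 10, "ethernet")
spine = Port("Spine 1", "Eth1/1", 40, "ethernet")

assert link_ok(leaf, mlom)       # matched 10G-to-10G link is accepted
assert not link_ok(leaf, spine)  # 10G-to-40G mismatch is rejected
```

A check like this could be run over a planned cable map before any physical cabling begins.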
Non-Hyper-Converged Ultra M Model Network Topology
Figure 12 illustrates the logical leaf and spine topology for the various networks required for the non-Hyper-Converged models.
Figure 12 Non-Hyper-Converged Ultra M Model Leaf and Spine Topology
As identified in Cisco Nexus Switches, the number of leaf and spine switches differs between the Ultra M models. Similarly, the specific leaf and spine ports used also depend on the Ultra M solution model you are deploying. General guidelines for interconnecting the leaf and spine switches in the non-Hyper-Converged Large model are provided in Table 15 through Table 22. Use the information in these tables to make appropriate adjustments to your network topology based on your deployment scenario (e.g. the number of Compute Nodes).
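The adjustment for smaller deployments described above can be sketched programmatically. Per Table 15 through Table 22, Compute Nodes occupy sequential leaf ports starting at a fixed first port (port 8 on Leaf 1 in the Large model); smaller deployments simply consume fewer sequential ports. The helper below assumes that pattern.

```python
# Sketch: deriving the leaf port range for a given number of Compute Nodes,
# following the sequential-allocation pattern of Table 17 (Leaf 1 ports
# 8-26 for the 19 Compute Nodes of the Large model). The default start
# port is taken from Table 17; other tables use different start ports.

def compute_node_ports(num_compute_nodes: int, first_port: int = 8) -> list[int]:
    # One leaf port per Compute Node, assigned sequentially.
    return list(range(first_port, first_port + num_compute_nodes))

# Large model: 19 Compute Nodes -> ports 8 through 26 on Leaf 1.
assert compute_node_ports(19) == list(range(8, 27))
# A smaller deployment with 10 Compute Nodes would use ports 8-17.
assert compute_node_ports(10)[-1] == 17
```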
Table 15 - Catalyst Management Switch 1 Port Interconnects

Switch Port(s) | Device | Network | Port(s) | Notes
1 | Staging Server | Management | CIMC |
2-4 | Controller Nodes | Management | CIMC | 3 sequential ports - 1 per Controller Node
5-7 | Ceph Storage Nodes | Management | CIMC | 3 sequential ports - 1 per Ceph Storage Node
8 | Staging Server | Provisioning | Mgmt |
9-11 | Controller Nodes | Provisioning | Mgmt | 3 sequential ports - 1 per Controller Node
12-14 | Ceph Storage Nodes | Provisioning | Mgmt | 3 sequential ports - 1 per Ceph Storage Node
15 | Catalyst Management Switch 2 | InterLink | 39 |
23 | Staging Server (OSP-D) | External | Intel Onboard Port 2 (LAN 2) |
24 | External Router | External | |
43 | Leaf 1 | InterLink | 48 |
44 | Leaf 2 | InterLink | 48 |
45 | Leaf 1 | Management | Mgmt 0 |
46 | Leaf 2 | Management | Mgmt 0 |
47 | Spine 1 | Management | Mgmt 0 |
Table 16 - Catalyst Management Switch 2 Port Interconnects

Switch Port(s) | Device | Network | Port(s) | Notes
1-19 | Compute Nodes | Management | CIMC | 19 sequential ports - 1 per Compute Node
20-38 | Compute Nodes | Provisioning | Mgmt | 19 sequential ports - 1 per Compute Node
39 | Catalyst Management Switch 1 | InterLink | 15 |
45 | Leaf 3 | Management | Mgmt 0 |
46 | Leaf 4 | Management | Mgmt 0 |
47 | Spine 2 | Management | Mgmt 0 |
Table 17 - Leaf 1 Port Interconnects

Leaf Port(s) | Device | Network | Port(s) | Notes
2-4 | Controller Nodes | Management & Orchestration | MLOM 1 (active) | 3 sequential ports - 1 per Controller Node
5-7 | Ceph Storage Nodes | Management & Orchestration | MLOM 1 (active) | 3 sequential ports - 1 per Ceph Storage Node
8-26 | Compute Nodes | Management & Orchestration | MLOM 1 (active) | 19 sequential ports - 1 per Compute Node
48 | Catalyst Management Switch 1 | Management & Orchestration | 43 |
49-51 | Spine 1 | Uplink | 1-3 | Leaf 1 port 49 connects to Spine 1 port 1; port 50 to Spine 1 port 2; port 51 to Spine 1 port 3
52-54 | Spine 2 | Uplink | 1-3 | Leaf 1 port 52 connects to Spine 2 port 1; port 53 to Spine 2 port 2; port 54 to Spine 2 port 3
Table 18 - Leaf 2 Port Interconnects

Leaf Port(s) | Device | Network | Port(s) | Notes
2-4 | Controller Nodes | Management & Orchestration | MLOM 2 (redundant) | 3 sequential ports - 1 per Controller Node
5-7 | Ceph Storage Nodes | Management & Orchestration | MLOM 2 (redundant) | 3 sequential ports - 1 per Ceph Storage Node
8-26 | Compute Nodes | Management & Orchestration | MLOM 2 (redundant) | 19 sequential ports - 1 per Compute Node
48 | Catalyst Management Switch 1 | Management & Orchestration | 44 |
49-51 | Spine 1 | Uplink | 4-6 | Leaf 2 port 49 connects to Spine 1 port 4; port 50 to Spine 1 port 5; port 51 to Spine 1 port 6
52-54 | Spine 2 | Uplink | 4-6 | Leaf 2 port 52 connects to Spine 2 port 4; port 53 to Spine 2 port 5; port 54 to Spine 2 port 6
Table 19 - Leaf 3 Port Interconnects

Leaf Port(s) | Device | Network | Port(s) | Notes
1-37 (odd) | Compute Nodes | Di-internal | PCIe01 1 (active) | 19 ports - 1 per Compute Node
2-28 (even) | Compute Nodes | Service | PCIe04 1 (active) | 19 ports - 1 per Compute Node
49-51 | Spine 1 | Uplink | 7-9 | Leaf 3 port 49 connects to Spine 1 port 7; port 50 to Spine 1 port 8; port 51 to Spine 1 port 9
52-54 | Spine 2 | Uplink | 7-9 | Leaf 3 port 52 connects to Spine 2 port 7; port 53 to Spine 2 port 8; port 54 to Spine 2 port 9
Table 20 - Leaf 4 Port Interconnects

Leaf Port(s) | Device | Network | Port(s) | Notes
1-37 (odd) | Compute Nodes | Service | PCIe01 2 (redundant) | 19 ports - 1 per Compute Node
2-28 (even) | Compute Nodes | Di-internal | PCIe04 2 (redundant) | 19 ports - 1 per Compute Node
49-51 | Spine 1 | Uplink | 10-12 | Leaf 4 port 49 connects to Spine 1 port 10; port 50 to Spine 1 port 11; port 51 to Spine 1 port 12
52-54 | Spine 2 | Uplink | 10-12 | Leaf 4 port 52 connects to Spine 2 port 10; port 53 to Spine 2 port 11; port 54 to Spine 2 port 12
Table 21 - Spine 1 Port Interconnects

Spine Port(s) | Device | Network | Port(s) | Notes
1-3 | Leaf 1 | Uplink | 49-51 | Spine 1 port 1 connects to Leaf 1 port 49; port 2 to Leaf 1 port 50; port 3 to Leaf 1 port 51
4-6 | Leaf 2 | Uplink | 49-51 | Spine 1 port 4 connects to Leaf 2 port 49; port 5 to Leaf 2 port 50; port 6 to Leaf 2 port 51
7-9 | Leaf 3 | Uplink | 49-51 | Spine 1 port 7 connects to Leaf 3 port 49; port 8 to Leaf 3 port 50; port 9 to Leaf 3 port 51
10-12 | Leaf 4 | Uplink | 49-51 | Spine 1 port 10 connects to Leaf 4 port 49; port 11 to Leaf 4 port 50; port 12 to Leaf 4 port 51
20-31 | Spine 2 | InterLink | 20-31 | Spine 1 ports 20 through 31 connect one-to-one to Spine 2 ports 20 through 31
Table 22 - Spine 2 Port Interconnects

Spine Port(s) | Device | Network | Port(s) | Notes
1-3 | Leaf 1 | Uplink | 49-51 | Spine 2 port 1 connects to Leaf 1 port 49; port 2 to Leaf 1 port 50; port 3 to Leaf 1 port 51
4-6 | Leaf 2 | Uplink | 49-51 | Spine 2 port 4 connects to Leaf 2 port 49; port 5 to Leaf 2 port 50; port 6 to Leaf 2 port 51
7-9 | Leaf 3 | Uplink | 49-51 | Spine 2 port 7 connects to Leaf 3 port 49; port 8 to Leaf 3 port 50; port 9 to Leaf 3 port 51
10-12 | Leaf 4 | Uplink | 49-51 | Spine 2 port 10 connects to Leaf 4 port 49; port 11 to Leaf 4 port 50; port 12 to Leaf 4 port 51
20-31 | Spine 1 | InterLink | 20-31 | Spine 2 ports 20 through 31 connect one-to-one to Spine 1 ports 20 through 31
Hyper-Converged Ultra M Single and Multi-VNF Model Network Topology
Figure 13 illustrates the logical leaf and spine topology for the various networks required for the Hyper-Converged Ultra M Single and Multi-VNF models.
Figure 13 Hyper-Converged Ultra M Single and Multi-VNF Leaf and Spine Topology
As identified in Cisco Nexus Switches, the number of leaf and spine switches differs between the Ultra M models. Similarly, the specific leaf and spine ports used also depend on the Ultra M solution model being deployed. General guidelines for interconnecting the leaf and spine switches in an Ultra M XS multi-VNF deployment are provided in Table 23 through Table 32. Using the information in these tables, you can make appropriate adjustments to your network topology based on your deployment scenario (e.g. the number of VNFs and the number of Compute Nodes).
Table 23 - Catalyst Management Switch 1 (Rack 1) Port Interconnects

Switch Port(s) | Device | Network | Port(s) | Notes
1, 2, 11 | OSD Compute Nodes | Management | CIMC | 3 non-sequential ports - 1 per OSD Compute Node
3-10 | Compute Nodes | Management | CIMC | 6 sequential ports - 1 per Compute Node
12 | OSP-D Server | Management | CIMC | Management Switch 1 only
13 | Controller 0 | Management | CIMC |
21, 22, 31 | OSD Compute Nodes | Provisioning | Mgmt | 3 non-sequential ports - 1 per OSD Compute Node
23-30 | Compute Nodes | Provisioning | Mgmt | 6 sequential ports - 1 per Compute Node
32-33 | OSP-D Server | Provisioning | Mgmt | 2 sequential ports
34 | Controller 0 | Management | CIMC |
47 | Leaf 1 | Management | 48 | Switch port 47 connects with Leaf 1 port 48
48 | Leaf 2 | Management | 48 | Switch port 48 connects with Leaf 2 port 48
Table 24 - Catalyst Management Switch 2 (Rack 2) Port Interconnects

Switch Port(s) | Device | Network | Port(s) | Notes
1-10 | Compute Nodes | Management | CIMC | 10 sequential ports - 1 per Compute Node
14 | Controller 1 | Management | CIMC |
15 | Controller 2 | Management | CIMC |
21-30 | Compute Nodes | Provisioning | Mgmt | 10 sequential ports - 1 per Compute Node
35 | Controller 1 | Provisioning | Mgmt |
36 | Controller 2 | Provisioning | Mgmt |
47 | Leaf 3 | Management | 48 | Switch port 47 connects with Leaf 3 port 48
48 | Leaf 4 | Management | 48 | Switch port 48 connects with Leaf 4 port 48
Table 25 - Catalyst Management Switch 3 (Rack 3) Port Interconnects

Switch Port(s) | Device | Network | Port(s) | Notes
1-10 | Compute Nodes | Management | CIMC | 10 sequential ports - 1 per Compute Node
21-30 | Compute Nodes | Provisioning | Mgmt | 10 sequential ports - 1 per Compute Node
47 | Leaf 5 | Management | 48 | Switch port 47 connects with Leaf 5 port 48
48 | Leaf 6 | Management | 48 | Switch port 48 connects with Leaf 6 port 48
Table 26 - Catalyst Management Switch 4 (Rack 4) Port Interconnects

Switch Port(s) | Device | Network | Port(s) | Notes
1-10 | Compute Nodes | Management | CIMC | 10 sequential ports - 1 per Compute Node
21-30 | Compute Nodes | Provisioning | Mgmt | 10 sequential ports - 1 per Compute Node
47 | Leaf 7 | Management | 48 | Switch port 47 connects with Leaf 7 port 48
48 | Leaf 8 | Management | 48 | Switch port 48 connects with Leaf 8 port 48
Table 27 - Leaf 1 and 2 (Rack 1) Port Interconnects

Leaf Port(s) | Device | Network | Port(s) | Notes

Leaf 1:
1, 2, 11 | OSD Compute Nodes | Management & Orchestration | MLOM P1 (active) | 3 non-sequential ports - 1 per OSD Compute Node
12 | Controller 0 Node | Management & Orchestration | MLOM P1 (active) |
17, 18, 27 | OSD Compute Nodes | Di-internal | PCIe01 P1 (active) | 3 non-sequential ports - 1 per OSD Compute Node
3-10 (inclusive) | Compute Nodes | Management & Orchestration | MLOM P1 (active) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node
19-26 (inclusive) | Compute Nodes | Di-internal | PCIe01 P1 (active) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node
33-42 (inclusive) | Compute Nodes / OSD Compute Nodes | Service | PCIe04 P1 (active) | Sequential ports based on the number of Compute Nodes and/or OSD Compute Nodes - 1 per OSD Compute Node and/or Compute Node. NOTE: Though the OSD Compute Nodes do not [...] ensure compatibility within the OpenStack Overcloud deployment.
48 | Catalyst Management Switch 1 | Management | 47 | Leaf 1 connects to Management Switch 1
49-50 | Spine 1 | Downlink | 1-2 | Leaf 1 port 49 connects to Spine 1 port 1; port 50 to Spine 1 port 2
51-52 | Spine 2 | Downlink | 3-4 | Leaf 1 port 51 connects to Spine 2 port 3; port 52 to Spine 2 port 4

Leaf 2:
1, 2, 11 | OSD Compute Nodes | Management & Orchestration | MLOM P2 (redundant) | 3 non-sequential ports - 1 per OSD Compute Node
12 | Controller 0 Node | Management & Orchestration | MLOM P2 (redundant) |
17, 18, 27 | OSD Compute Nodes | Di-internal | PCIe04 P2 (redundant) | 3 non-sequential ports - 1 per OSD Compute Node
3-10 (inclusive) | Compute Nodes | Management & Orchestration | MLOM P2 (redundant) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node
19-26 (inclusive) | Compute Nodes | Di-internal | PCIe04 P2 (redundant) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node
33-42 (inclusive) | Compute Nodes / OSD Compute Nodes | Service | PCIe01 P2 (redundant) | Sequential ports based on the number of Compute Nodes and/or OSD Compute Nodes - 1 per OSD Compute Node and/or Compute Node. NOTE: Though the OSD Compute Nodes do not [...] ensure compatibility within the OpenStack Overcloud deployment.
48 | Catalyst Management Switch 1 | Management | 48 | Leaf 2 connects to Management Switch 1
49-50 | Spine 1 | Downlink | 1-2 | Leaf 2 port 49 connects to Spine 1 port 1; port 50 to Spine 1 port 2
51-52 | Spine 2 | Downlink | 3-4 | Leaf 2 port 51 connects to Spine 2 port 3; port 52 to Spine 2 port 4
Table 28 - Leaf 3 and 4 (Rack 2) Port Interconnects

Leaf Port(s) | Device | Network | Port(s) | Notes

Leaf 3:
1-10 (inclusive) | Compute Nodes | Management & Orchestration | MLOM P1 (active) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node. NOTE: Leaf ports 1 and 2 are used for the first two Compute Nodes on VNFs other than VNF1 (Rack 1). These are used to host management-related VMs as shown in Figure 4.
13-14 (inclusive) | Controller Nodes | Management & Orchestration | MLOM P1 (active) | Leaf 3 port 13 connects to the Controller 1 MLOM P1 port; Leaf 3 port 14 connects to the Controller 2 MLOM P1 port
17-26 (inclusive) | Compute Nodes | Di-internal | PCIe01 P1 (active) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node. NOTE: Leaf ports 17 and 18 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure 4.
33-42 (inclusive) | Compute Nodes | Service | PCIe04 P1 (active) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node
48 | Catalyst Management Switch 2 | Management | 47 | Leaf 3 connects to Management Switch 2
49-50 | Spine 1 | Downlink | 5-6 | Leaf 3 port 49 connects to Spine 1 port 5; port 50 to Spine 1 port 6
51-52 | Spine 2 | Downlink | 7-8 | Leaf 3 port 51 connects to Spine 2 port 7; port 52 to Spine 2 port 8

Leaf 4:
1-10 (inclusive) | Compute Nodes | Management & Orchestration | MLOM P2 (redundant) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node. NOTE: Leaf ports 1 and 2 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure 4.
13-14 (inclusive) | Controller Nodes | Management & Orchestration | MLOM P2 (redundant) | Leaf 4 port 13 connects to the Controller 1 MLOM P2 port; Leaf 4 port 14 connects to the Controller 2 MLOM P2 port
17-26 (inclusive) | Compute Nodes | Di-internal | PCIe04 P2 (redundant) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node. NOTE: Leaf ports 17 and 18 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure 4.
33-42 (inclusive) | Compute Nodes | Service | PCIe01 P2 (redundant) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node
48 | Catalyst Management Switch 2 | Management | 48 | Leaf 4 connects to Management Switch 2
49-50 | Spine 1 | Downlink | 5-6 | Leaf 4 port 49 connects to Spine 1 port 5; port 50 to Spine 1 port 6
51-52 | Spine 2 | Downlink | 7-8 | Leaf 4 port 51 connects to Spine 2 port 7; port 52 to Spine 2 port 8
Table 29 - Leaf 5 and 6 (Rack 3) Port Interconnects

Leaf Port(s) | Device | Network | Port(s) | Notes

Leaf 5:
1-10 (inclusive) | Compute Nodes | Management & Orchestration | MLOM P1 (active) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node. NOTE: Leaf ports 1 and 2 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure 4.
17-26 (inclusive) | Compute Nodes | Di-internal | PCIe01 P1 (active) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node. NOTE: Leaf ports 17 and 18 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure 4.
33-42 (inclusive) | Compute Nodes | Service | PCIe04 P1 (active) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node
48 | Catalyst Management Switch 3 | Management | 47 | Leaf 5 connects to Management Switch 3
49-50 | Spine 1 | Downlink | 9-10 | Leaf 5 port 49 connects to Spine 1 port 9; port 50 to Spine 1 port 10
51-52 | Spine 2 | Downlink | 11-12 | Leaf 5 port 51 connects to Spine 2 port 11; port 52 to Spine 2 port 12

Leaf 6:
1-10 (inclusive) | Compute Nodes | Management & Orchestration | MLOM P2 (redundant) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node. NOTE: Leaf ports 1 and 2 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure 4.
17-26 (inclusive) | Compute Nodes | Di-internal | PCIe04 P2 (redundant) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node. NOTE: Leaf ports 17 and 18 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure 4.
33-42 (inclusive) | Compute Nodes | Service | PCIe01 P2 (redundant) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node
48 | Catalyst Management Switch 3 | Management | 48 | Leaf 6 connects to Management Switch 3
49-50 | Spine 1 | Downlink | 9-10 | Leaf 6 port 49 connects to Spine 1 port 9; port 50 to Spine 1 port 10
51-52 | Spine 2 | Downlink | 11-12 | Leaf 6 port 51 connects to Spine 2 port 11; port 52 to Spine 2 port 12
Table 30 - Leaf 7 and 8 (Rack 4) Port Interconnects

Leaf Port(s) | Device | Network | Port(s) | Notes

Leaf 7:
1-10 (inclusive) | Compute Nodes | Management & Orchestration | MLOM P1 (active) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node. NOTE: Leaf ports 1 and 2 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure 4.
17-26 (inclusive) | Compute Nodes | Di-internal | PCIe01 P1 (active) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node. NOTE: Leaf ports 17 and 18 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure 4.
33-42 (inclusive) | Compute Nodes | Service | PCIe04 P1 (active) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node
48 | Catalyst Management Switch 4 | Management | 47 | Leaf 7 connects to Management Switch 4
49-50 | Spine 1 | Downlink | 13-14 | Leaf 7 port 49 connects to Spine 1 port 13; port 50 to Spine 1 port 14
51-52 | Spine 2 | Downlink | 15-16 | Leaf 7 port 51 connects to Spine 2 port 15; port 52 to Spine 2 port 16

Leaf 8:
1-10 (inclusive) | Compute Nodes | Management & Orchestration | MLOM P2 (redundant) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node. NOTE: Leaf ports 1 and 2 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure 4.
17-26 (inclusive) | Compute Nodes | Di-internal | PCIe04 P2 (redundant) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node. NOTE: Leaf ports 17 and 18 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure 4.
33-42 (inclusive) | Compute Nodes | Service | PCIe01 P2 (redundant) | Sequential ports based on the number of Compute Nodes - 1 per Compute Node
48 | Catalyst Management Switch 4 | Management | 48 | Leaf 8 connects to Management Switch 4
49-50 | Spine 1 | Downlink | 13-14 | Leaf 8 port 49 connects to Spine 1 port 13; port 50 to Spine 1 port 14
51-52 | Spine 2 | Downlink | 15-16 | Leaf 8 port 51 connects to Spine 2 port 15; port 52 to Spine 2 port 16
Table 31 - Spine 1 Port Interconnect Guidelines

Spine Port(s) | Device | Network | Port(s) | Notes
1-2, 5-6, 9-10, 13-14 | Leaf 1, 3, 5, 7 | Downlink | 49-50 | Spine 1 ports 1-2 connect to Leaf 1 ports 49-50; ports 5-6 to Leaf 3 ports 49-50; ports 9-10 to Leaf 5 ports 49-50; ports 13-14 to Leaf 7 ports 49-50
3-4, 7-8, 11-12, 15-16 | Leaf 2, 4, 6, 8 | Downlink | 49-50 | Spine 1 ports 3-4 connect to Leaf 2 ports 49-50; ports 7-8 to Leaf 4 ports 49-50; ports 11-12 to Leaf 6 ports 49-50; ports 15-16 to Leaf 8 ports 49-50
29-30, 31, 32, 33-34 | Spine 2 | InterLink | 29-30, 31, 32, 33-34 | Spine 1 ports 29 through 34 connect one-to-one to Spine 2 ports 29 through 34
21-22, 23-24, 25-26 | Router | Uplink | - |
Table 32 - Spine 2 Port Interconnect Guidelines

Spine Port(s) | Device | Network | Port(s) | Notes
1-2, 5-6, 9-10, 13-14 | Leaf 1, 3, 5, 7 | Downlink | 51-52 | Spine 2 ports 1-2 connect to Leaf 1 ports 51-52; ports 5-6 to Leaf 3 ports 51-52; ports 9-10 to Leaf 5 ports 51-52; ports 13-14 to Leaf 7 ports 51-52
3-4, 7-8, 11-12, 15-16 | Leaf 2, 4, 6, 8 | Downlink | 51-52 | Spine 2 ports 3-4 connect to Leaf 2 ports 51-52; ports 7-8 to Leaf 4 ports 51-52; ports 11-12 to Leaf 6 ports 51-52; ports 15-16 to Leaf 8 ports 51-52
29-30, 31, 32, 33-34 | Spine 1 | InterLink | 29-30, 31, 32, 33-34 | Spine 2 ports 29 through 34 connect one-to-one to Spine 1 ports 29 through 34
21-22, 23-24, 25-26 | Router | Uplink | - |
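The spine downlink assignments in Tables 31 and 32 follow a simple arithmetic pattern: leaf N lands on spine ports 2N-1 and 2N of each spine. A small helper makes that pattern explicit; this is a planning sketch derived only from those two tables.

```python
# Sketch: the spine downlink pattern of Tables 31 and 32, where leaf N
# occupies spine ports 2N-1 and 2N (from leaf ports 49-50 toward Spine 1
# and leaf ports 51-52 toward Spine 2).

def spine_ports_for_leaf(leaf: int) -> tuple[int, int]:
    # e.g. Leaf 3 -> spine ports (5, 6); Leaf 8 -> spine ports (15, 16)
    return (2 * leaf - 1, 2 * leaf)

# Spot-checks against Table 31 (Spine 1 downlinks):
assert spine_ports_for_leaf(1) == (1, 2)    # Leaf 1 ports 49-50
assert spine_ports_for_leaf(4) == (7, 8)    # Leaf 4 ports 49-50
assert spine_ports_for_leaf(7) == (13, 14)  # Leaf 7 ports 49-50
```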
Deploying the Ultra M Solution
Ultra M is a multi-product solution. Detailed instructions for installing each of these products are beyond the scope of this document. Instead, the sections that follow identify the specific, non-default parameters that must be configured during the installation and deployment of those products in order to deploy the entire solution.
Deployment Workflow
Figure 14 Ultra M Deployment Workflow
Plan Your Deployment
Before deploying the Ultra M solution, it is important to carefully plan your deployment.
Network Planning
Networking Overview provides a general overview and identifies basic requirements for networking the Ultra M solution.
With this background, use the tables in Appendix: Network Definitions (Layer 2 and 3) to help plan the details of your network configuration.
Install and Cable the Hardware
This section describes the procedure to install all the components included in the Ultra M Solution.
Related Documentation
Use the information and instructions found in the installation documentation for the respective hardware components to properly install the Ultra M solution hardware. Refer to the following installation guides for more information:
Catalyst 2960-XR Switch
http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst2960xr/hardware/installation/guide/b_c2960xr_hig.html
Catalyst 3850 48T-S Switch
http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst3850/hardware/installation/guide/b_c3850_hig.html
Nexus 93180-YC 48 Port
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/hw/n93180ycex_hig/guide/b_n93180ycex_nxo s_mode_hardware_install_guide.html
Nexus 9236C 36 Port
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/hw/n9236c_hig/guide/b_c9236c_nxos_mode_ hardware_install_guide.html
UCS C240 M4S Server
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/c/hw/C240M4/install/C240M4.html
Rack Layout
Non-Hyper-Converged Ultra M Small Deployment Rack Configuration
Table 33 provides details for the recommended rack layout for the non-Hyper-Converged Ultra M Small deployment model.
Table 33 - Non-Hyper-Converged Ultra M Small Deployment

Rack Unit | Rack #1
1 | Mgmt Switch: Catalyst 2960-48TD-I
2 | Empty
3 | Leaf TOR Switch A: Nexus 93180-YC-EX
4 | Leaf TOR Switch A: Nexus 93180-YC-EX
5/6 | Staging Server: UCS C240 M4S SFF
7/8 | Controller Node A: UCS C240 M4S SFF
9/10 | Controller Node B: UCS C240 M4S SFF
11/12 | Controller Node C: UCS C240 M4S SFF
13/14 | UEM VM A: UCS C240 M4S SFF
15/16 | UEM VM B: UCS C240 M4S SFF
17/18 | UEM VM C: UCS C240 M4S SFF
19/20 | Demux SF VM: UCS C240 M4S SFF
21/22 | Standby SF VM: UCS C240 M4S SFF
23/24 | Active SF VM 1: UCS C240 M4S SFF
25/26 | Active SF VM 2: UCS C240 M4S SFF
27/28 | Active SF VM 3: UCS C240 M4S SFF
29/30 | Active SF VM 4: UCS C240 M4S SFF
31/32 | Active SF VM 5: UCS C240 M4S SFF
33/34 | Active SF VM 6: UCS C240 M4S SFF
35/36 | Active SF VM 7: UCS C240 M4S SFF
37/38 | Ceph Node A: UCS C240 M4S SFF
39/40 | Ceph Node B: UCS C240 M4S SFF
41/42 | Ceph Node C: UCS C240 M4S SFF
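A rack plan such as Table 33 can be captured as data and sanity-checked before installation, for example to confirm that no rack unit is claimed twice. The layout below is a subset of Table 33; the check itself is a generic sketch.

```python
# Sketch: representing part of the Table 33 rack plan as a mapping from
# occupied rack units to devices, and verifying no rack unit overlaps.

RACK_1 = {
    (1,): "Mgmt Switch: Catalyst 2960-48TD-I",
    (3,): "Leaf TOR Switch A: Nexus 93180-YC-EX",
    (5, 6): "Staging Server: UCS C240 M4S SFF",
    (7, 8): "Controller Node A: UCS C240 M4S SFF",
    (37, 38): "Ceph Node A: UCS C240 M4S SFF",
}

def overlapping_units(layout: dict) -> set[int]:
    # Flag any rack unit claimed by more than one device.
    seen, dupes = set(), set()
    for units in layout:
        for u in units:
            (dupes if u in seen else seen).add(u)
    return dupes

assert overlapping_units(RACK_1) == set()  # Table 33 subset has no overlaps
```

The same check extends directly to the two-rack and four-rack layouts in Tables 34 through 37.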
Non-Hyper-Converged Ultra M Medium Deployment
Table 34 provides details for the recommended rack layout for the non-Hyper-Converged Ultra M Medium deployment model.
Table 34 - Non-Hyper-Converged Ultra M Medium Deployment

Rack Unit | Rack #1 | Rack #2
1 | Mgmt Switch: Catalyst 2960-48TD-I | Mgmt Switch: Catalyst 2960-48TD-I
2 | Spine EOR Switch A: Nexus 9236C | Spine EOR Switch A: Nexus 9236C
3 | Leaf TOR Switch A: Nexus 93180-YC-EX | Leaf TOR Switch A: Nexus 93180-YC-EX
4 | Leaf TOR Switch A: Nexus 93180-YC-EX | Leaf TOR Switch A: Nexus 93180-YC-EX
5/6 | Empty | UEM VM A: UCS C240 M4S SFF
7/8 | Staging Server: UCS C240 M4S SFF | UEM VM B: UCS C240 M4S SFF
9/10 | Controller Node A: UCS C240 M4S SFF | UEM VM C: UCS C240 M4S SFF
11/12 | Controller Node B: UCS C240 M4S SFF | Demux SF VM: UCS C240 M4S SFF
13/14 | Controller Node C: UCS C240 M4S SFF | Standby SF VM: UCS C240 M4S SFF
15/16 | Empty | Active SF VM 1: UCS C240 M4S SFF
17/18 | Empty | Active SF VM 2: UCS C240 M4S SFF
19/20 | Empty | Active SF VM 3: UCS C240 M4S SFF
21/22 | Empty | Active SF VM 4: UCS C240 M4S SFF
23/24 | Empty | Active SF VM 5: UCS C240 M4S SFF
25/26 | Empty | Active SF VM 6: UCS C240 M4S SFF
27/28 | Empty | Active SF VM 7: UCS C240 M4S SFF
29/30 | Empty | Active SF VM 8: UCS C240 M4S SFF
31/32 | Empty | Active SF VM 9: UCS C240 M4S SFF
33/34 | Empty | Active SF VM 10: UCS C240 M4S SFF
35/36 | Empty | Empty
37/38 | Ceph Node A: UCS C240 M4S SFF | Empty
39/40 | Ceph Node B: UCS C240 M4S SFF | Empty
41/42 | Ceph Node C: UCS C240 M4S SFF | Empty
Non-Hyper-Converged Ultra M Large Deployment
Table 35 provides details for the recommended rack layout for the non-Hyper-Converged Ultra M Large deployment model.
Table 35 - Non-Hyper-Converged Ultra M Large Deployment

Rack Unit | Rack #1 | Rack #2
1 | Mgmt Switch: Catalyst 2960-48TD-I | Mgmt Switch: Catalyst 2960-48TD-I
2 | Spine EOR Switch A: Nexus 9236C | Spine EOR Switch A: Nexus 9236C
3 | Leaf TOR Switch A: Nexus 93180-YC-EX | Leaf TOR Switch A: Nexus 93180-YC-EX
4 | Leaf TOR Switch A: Nexus 93180-YC-EX | Leaf TOR Switch A: Nexus 93180-YC-EX
5/6 | Empty | UEM VM A: UCS C240 M4S SFF
7/8 | Staging Server: UCS C240 M4S SFF | UEM VM B: UCS C240 M4S SFF
9/10 | Controller Node A: UCS C240 M4S SFF | UEM VM C: UCS C240 M4S SFF
11/12 | Controller Node B: UCS C240 M4S SFF | Demux SF VM: UCS C240 M4S SFF
13/14 | Controller Node C: UCS C240 M4S SFF | Standby SF VM: UCS C240 M4S SFF
15/16 | Empty | Active SF VM 1: UCS C240 M4S SFF
17/18 | Empty | Active SF VM 2: UCS C240 M4S SFF
19/20 | Empty | Active SF VM 3: UCS C240 M4S SFF
21/22 | Empty | Active SF VM 4: UCS C240 M4S SFF
23/24 | Empty | Active SF VM 5: UCS C240 M4S SFF
25/26 | Empty | Active SF VM 6: UCS C240 M4S SFF
27/28 | Empty | Active SF VM 7: UCS C240 M4S SFF
29/30 | Empty | Active SF VM 8: UCS C240 M4S SFF
31/32 | Empty | Active SF VM 9: UCS C240 M4S SFF
33/34 | Empty | Active SF VM 10: UCS C240 M4S SFF
35/36 | Empty | Active SF VM 11: UCS C240 M4S SFF
37/38 | Ceph Node A: UCS C240 M4S SFF | Active SF VM 12: UCS C240 M4S SFF
39/40 | Ceph Node B: UCS C240 M4S SFF | Active SF VM 13: UCS C240 M4S SFF
41/42 | Ceph Node C: UCS C240 M4S SFF | Active SF VM 14: UCS C240 M4S SFF
Hyper-Converged Ultra M XS Single VNF Deployment
Table 36 provides details for the recommended rack layout for the Hyper-Converged Ultra M XS Single VNF deployment model.
Table 36 - Hyper-Converged Ultra M XS Single VNF Deployment Rack Layout

Rack Unit | Rack #1 | Rack #2
RU-1 | Empty | Empty
RU-2 | Spine EOR Switch A: Nexus 9236C | Spine EOR Switch B: Nexus 9236C
RU-3 | Empty | Empty
RU-4 | VNF Mgmt Switch: Catalyst 3850-48T-S or Catalyst 2960XR-48TD | Empty
RU-5 | VNF Leaf TOR Switch A: Nexus 93180YC-EX | Empty
RU-6 | VNF Leaf TOR Switch B: Nexus 93180YC-EX | Empty
RU-7/8 | Ultra VNF-EM 1A: UCS C240 M4S SFF | Empty
RU-9/10 | Ultra VNF-EM 1B: UCS C240 M4S SFF | Empty
RU-11/12 | Empty | Empty
RU-13/14 | Demux SF: UCS C240 M4S SFF | Empty
RU-15/16 | Standby SF: UCS C240 M4S SFF | Empty
RU-17/18 | Active SF 1: UCS C240 M4S SFF | Empty
RU-19/20 | Active SF 2: UCS C240 M4S SFF | Empty
RU-21/22 | Active SF 3: UCS C240 M4S SFF | Empty
RU-23/24 | Active SF 4: UCS C240 M4S SFF | Empty
RU-25/26 | Active SF 5: UCS C240 M4S SFF | Empty
RU-27/28 | Active SF 6: UCS C240 M4S SFF | Empty
RU-29/30 | Empty | Empty
RU-31/32 | Empty | Empty
RU-33/34 | Empty | Empty
RU-35/36 | Ultra VNF-EM 1C | OpenStack Control C: UCS C240 M4S SFF
RU-37/38 | Staging Server: UCS C240 M4S SFF | Empty
RU-39/40 | OpenStack Control A: UCS C240 M4S SFF | OpenStack Control B: UCS C240 M4S SFF
RU-41/42 | Empty | Empty
Cables | Controller Rack Cables | Controller Rack Cables
Cables | Spine Uplink/Interconnect Cables | Spine Uplink/Interconnect Cables
Cables | Leaf TOR To Spine Uplink Cables | Empty
Cables | VNF Rack Cables | Empty
Hyper-Converged Ultra M XS Multi-VNF Deployment
Table 37 provides details for the recommended rack layout for the Hyper-Converged Ultra M XS Multi-VNF deployment model.
Table 37 - Hyper-Converged Ultra M XS Multi-VNF Deployment Rack Layout
Rack #1 Rack #2 Rack #3 Rack #4
RU-1 Empty Empty Empty Empty
RU-2 Spine EOR Switch A: Nexus 9236C Spine EOR Switch B: Nexus 9236C Empty Empty
RU-3 Empty Empty Empty Empty
RU-4 VNF Mgmt Switch: Catalyst 3850-48T-S or Catalyst 2960XR-48TD (one in each rack)
RU-5 VNF Leaf TOR Switch A: Nexus 93180YC-EX (one in each rack)
RU-6 VNF Leaf TOR Switch B: Nexus 93180YC-EX (one in each rack)
RU-7/8 Ultra VNF-EM 1A: UCS C240 M4S SFF Ultra VNF-EM 2A: UCS C240 M4S SFF Ultra VNF-EM 3A: UCS C240 M4S SFF Ultra VNF-EM 4A: UCS C240 M4S SFF
RU-9/10 Ultra VNF-EM 1B: UCS C240 M4S SFF Ultra VNF-EM 2B: UCS C240 M4S SFF Ultra VNF-EM 3B: UCS C240 M4S SFF Ultra VNF-EM 4B: UCS C240 M4S SFF
RU-11/12 Empty Empty Empty Empty
RU-13/14 Demux SF: UCS C240 M4S SFF (one in each rack)
RU-15/16 Standby SF: UCS C240 M4S SFF (one in each rack)
RU-17/18 Active SF 1: UCS C240 M4S SFF (one in each rack)
RU-19/20 Active SF 2: UCS C240 M4S SFF (one in each rack)
RU-21/22 Active SF 3: UCS C240 M4S SFF (one in each rack)
RU-23/24 Active SF 4: UCS C240 M4S SFF (one in each rack)
RU-25/26 Active SF 5: UCS C240 M4S SFF (one in each rack)
RU-27/28 Active SF 6: UCS C240 M4S SFF (one in each rack)
RU-29/30 Empty Empty Empty Empty
RU-31/32 Empty Empty Empty Empty
RU-33/34 Empty Empty Empty Empty
RU-35/36 Ultra VNF-EM 1C,2C,3C,4C OpenStack Control C: UCS C240 M4S SFF Empty Empty
RU-37/38 Staging Server: UCS C240 M4S SFF Empty Empty Empty
RU-39/40 OpenStack Control A: UCS C240 M4S SFF OpenStack Control B: UCS C240 M4S SFF Empty Empty
RU-41/42 Empty Empty Empty Empty
Cables Controller Rack Cables Controller Rack Cables Controller Rack Cables Empty
Cables Spine Uplink/Interconnect Cables Spine Uplink/Interconnect Cables Empty Empty
Cables Leaf TOR To Spine Uplink Cables (one set in each rack)
Cables VNF Rack Cables VNF Rack Cables VNF Rack Cables VNF Rack Cables
Cable the Hardware
Once all of the hardware has been installed, connect the power and network cabling using the information and instructions in the documentation for each hardware product. Refer to Related Documentation for links to the hardware product documentation. Ensure that you install your network cables according to your network plan.
Configure the Switches
All of the switches must be configured according to your planned network specifications.
NOTE: Refer to Network Planning for information and considerations for planning your network.
Refer to the user documentation for each of the switches for configuration information and instructions:
■ Catalyst C2960XR-48TD-I: http://www.cisco.com/c/en/us/support/switches/catalyst-2960xr-48td-i- switch/model.html
■ Catalyst 3850 48T-S: http://www.cisco.com/c/en/us/support/switches/catalyst-3850-48t-s-switch/model.html
■ Nexus 93180YC-EX: http://www.cisco.com/c/en/us/support/switches/nexus-93180yc-fx-switch/model.html
■ Nexus 9236C: http://www.cisco.com/c/en/us/support/switches/nexus-9236c-switch/model.html
Prepare the UCS C-Series Hardware
UCS-C hardware preparation is performed through the Cisco Integrated Management Controller (CIMC). The tables in the following sections list the non-default parameters that must be configured for each server type:
■ Prepare the OSP-D Server / Staging Server
■ Prepare the Controller Nodes
■ Prepare the Compute Nodes
■ Prepare the OSD Compute Nodes
■ Prepare the Ceph Nodes
Refer to the UCS C-series product documentation for more information:
■ UCS C-Series Hardware: https://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-c240-m4-rack- server/model.html
■ CIMC Software: https://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-c-series-integrated- management-controller/tsd-products-support-series-home.html
NOTE: Part of the UCS server preparation is the configuration of virtual drives. If there are virtual drives present which need to be deleted, select the Virtual Drive Info tab, select the virtual drive you wish to delete, then click Delete Virtual Drive. Refer to the CIMC documentation for more information.
Prepare the OSP-D Server / Staging Server
Table 38 - OSP-D Server / Staging Server Parameters
Parameters and Settings Description
CIMC Utility Setup
Enable IPV4 Configures parameters for the dedicated management port.
Dedicated
No redundancy
IP address
Subnet mask
Gateway address
DNS address
Admin > User Management
Username Configures administrative user credentials for accessing the CIMC utility.
Password
Admin > Communication Services
IPMI over LAN Properties = Enabled Enables the use of Intelligent Platform Management Interface capabilities over the management port.
Server > BIOS > Configure BIOS > Advanced
Intel(R) Hyper-Threading Technology = Disabled Disables hyper-threading on server CPUs to optimize Ultra M system performance.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Info
Status = Unconfigured Good Ensures that the hardware is ready for use.
Prepare the Controller Nodes
Table 39 - Controller Node Parameters
Parameters and Settings Description
CIMC Utility Setup
Enable IPV4 Configures parameters for the dedicated management port.
Dedicated
No redundancy
IP address
Subnet mask
Gateway address
DNS address
Admin > User Management
Username Configures administrative user credentials for accessing the CIMC utility.
Password
Admin > Communication Services
IPMI over LAN Properties = Enabled Enables the use of Intelligent Platform Management Interface capabilities over the management port.
Server > BIOS > Configure BIOS > Advanced
Intel(R) Hyper-Threading Technology = Disabled Disables hyper-threading on server CPUs to optimize Ultra M system performance.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Info
Status = Unconfigured Good Ensures that the hardware is ready for use.
Storage > Cisco 12G SAS Modular RAID Controller > Controller Info
Virtual Drive Name = OS Creates the virtual drive required for use by the operating system (OS).
Read Policy = No Read Ahead
RAID Level = RAID 1
Cache Policy: Direct IO
Strip Size: 64KB
Disk Cache Policy: Unchanged
Access Policy: Read Write
Size: 1143455 MB
Write Policy: Write Through
Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info
Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background.
Prepare the Compute Nodes
Table 40 - Compute Node Parameters
Parameters and Settings Description
CIMC Utility Setup
Enable IPV4 Configures parameters for the dedicated management port.
Dedicated
No redundancy
IP address
Subnet mask
Gateway address
DNS address
Admin > User Management
Username Configures administrative user credentials for accessing the CIMC utility.
Password
Admin > Communication Services
IPMI over LAN Properties = Enabled Enables the use of Intelligent Platform Management Interface capabilities over the management port.
Server > BIOS > Configure BIOS > Advanced
Intel(R) Hyper-Threading Technology = Disabled Disables hyper-threading on server CPUs to optimize Ultra M system performance.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Info
Status = Unconfigured Good Ensures that the hardware is ready for use.
Storage > Cisco 12G SAS Modular RAID Controller > Controller Info
Virtual Drive Name = OS Creates the virtual drive required for use by the operating system (OS).
Read Policy = No Read Ahead
RAID Level = RAID 1
Cache Policy: Direct IO
Strip Size: 64KB
Disk Cache Policy: Unchanged
Access Policy: Read Write
Size: 1143455 MB
Write Policy: Write Through
Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OS
Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background.
Prepare the OSD Compute Nodes
NOTE: OSD Compute Nodes are only used in Hyper-Converged Ultra M models as described in UCS C-Series Servers.
Table 41 - OSD Compute Node Parameters
Parameters and Settings Description
CIMC Utility Setup
Enable IPV4 Configures parameters for the dedicated management port.
Dedicated
No redundancy
IP address
Subnet mask
Gateway address
DNS address
Admin > User Management
Username Configures administrative user credentials for accessing the CIMC utility.
Password
Admin > Communication Services
IPMI over LAN Properties = Enabled Enables the use of Intelligent Platform Management Interface capabilities over the management port.
Server > BIOS > Configure BIOS > Advanced
Intel(R) Hyper-Threading Technology = Disabled Disables hyper-threading on server CPUs to optimize Ultra M system performance.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Info
Status = Unconfigured Good Ensures that the hardware is ready for use.
SLOT-HBA Physical Drive Numbers = 1, 2, 3, 7, 8, 9, 10 Ensures that the UCS slot host-bus adapter for these drives is configured accordingly.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 1
Virtual Drive Name = BOOTOS Creates a virtual drive leveraging the storage space available to physical drive number 1. NOTE: Ensure that the size of this virtual drive is less than the size of the designated journal and storage drives.
Read Policy = No Read Ahead
RAID Level = RAID 1
Cache Policy: Direct IO
Strip Size: 64KB
Disk Cache Policy: Unchanged
Access Policy: Read Write
Size: 139236 MB
Write Policy: Write Through
Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, BOOTOS, Physical Drive Number = 1
Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background.
Set as Boot Drive Sets the BOOTOS virtual drive as the system boot drive.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 2
Virtual Drive Name = BOOTOS Creates a virtual drive leveraging the storage space available to physical drive number 2. NOTE: Ensure that the size of this virtual drive is less than the size of the designated journal and storage drives.
Read Policy = No Read Ahead
RAID Level = RAID 1
Cache Policy: Direct IO
Strip Size: 64KB
Disk Cache Policy: Unchanged
Access Policy: Read Write
Size: 139236 MB
Write Policy: Write Through
Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, BOOTOS, Physical Drive Number = 2
Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background.
Set as Boot Drive Sets the BOOTOS virtual drive as the system boot drive.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 3
Virtual Drive Name = JOURNAL Creates a virtual drive leveraging the storage space available to physical drive number 3.
Read Policy = No Read Ahead
RAID Level = RAID 0
Cache Policy: Direct IO
Strip Size: 64KB
Disk Cache Policy: Unchanged
Access Policy: Read Write
Size: 456809 MB
Write Policy: Write Through
Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, JOURNAL, Physical Drive Number = 3
Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 7
Virtual Drive Name = OSD1 Creates a virtual drive leveraging the storage space available to physical drive number 7.
Read Policy = No Read Ahead
RAID Level = RAID 0
Cache Policy: Direct IO
Strip Size: 64KB
Disk Cache Policy: Unchanged
Access Policy: Read Write
Size: 1143455 MB
Write Policy: Write Through
Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OSD1, Physical Drive Number = 7
Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 8
Virtual Drive Name = OSD2 Creates a virtual drive leveraging the storage space available to physical drive number 8.
Read Policy = No Read Ahead
RAID Level = RAID 0
Cache Policy: Direct IO
Strip Size: 64KB
Disk Cache Policy: Unchanged
Access Policy: Read Write
Size: 1143455 MB
Write Policy: Write Through
Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OSD2, Physical Drive Number = 8
Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 9
Virtual Drive Name = OSD3 Creates a virtual drive leveraging the storage space available to physical drive number 9.
Read Policy = No Read Ahead
RAID Level = RAID 0
Cache Policy: Direct IO
Strip Size: 64KB
Disk Cache Policy: Unchanged
Access Policy: Read Write
Size: 1143455 MB
Write Policy: Write Through
Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OSD3, Physical Drive Number = 9
Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 10
Virtual Drive Name = OSD4 Creates a virtual drive leveraging the storage space available to physical drive number 10.
Read Policy = No Read Ahead
RAID Level = RAID 0
Cache Policy: Direct IO
Strip Size: 64KB
Disk Cache Policy: Unchanged
Access Policy: Read Write
Size: 1143455 MB
Write Policy: Write Through
Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OSD4, Physical Drive Number = 10
Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background.
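The BOOTOS sizing rule noted above (the boot virtual drive must be smaller than the designated journal and storage drives) can be checked before any CIMC configuration is applied. The sketch below is illustrative only; scripting such a check is not part of the CIMC workflow, and the function name is an assumption:

```python
# Sanity-check a planned OSD Compute Node virtual-drive layout.
# Rule from the NOTE above: the BOOTOS drive must be smaller than the
# designated journal and storage (OSD) drives. All sizes are in MB.
def bootos_size_ok(bootos_mb, journal_mb, osd_mbs):
    """Return True if BOOTOS is smaller than the journal and every OSD drive."""
    return all(bootos_mb < other for other in [journal_mb, *osd_mbs])

# Values taken from Table 41: BOOTOS 139236 MB, JOURNAL 456809 MB,
# four OSD drives of 1143455 MB each.
print(bootos_size_ok(139236, 456809, [1143455, 1143455, 1143455, 1143455]))  # True
```

With the sizes given in Table 41 the check passes; an oversized boot drive would return False and should be corrected before initializing the virtual drives.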
Prepare the Ceph Nodes
NOTE: Ceph Nodes are only used in non-Hyper-Converged Ultra M models as described in UCS C-Series Servers.
Table 42 - Ceph Node Parameters
Parameters and Settings Description
CIMC Utility Setup
Enable IPV4 Configures parameters for the dedicated management port.
Dedicated
No redundancy
IP address
Subnet mask
Gateway address
DNS address
Admin > User Management
Username Configures administrative user credentials for accessing the CIMC utility.
Password
Admin > Communication Services
IPMI over LAN Properties = Enabled Enables the use of Intelligent Platform Management Interface capabilities over the management port.
Server > BIOS > Configure BIOS > Advanced
Intel(R) Hyper-Threading Technology = Disabled Disables hyper-threading on server CPUs to optimize Ultra M system performance.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Info
Status = Unconfigured Good Ensures that the hardware is ready for use.
SLOT-HBA Physical Drive Numbers = 1, 2, 3, 7, 8, 9, 10 Ensures that the UCS slot host-bus adapter for these drives is configured accordingly.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 1
Virtual Drive Name = BOOTOS Creates a virtual drive leveraging the storage space available to physical drive number 1. NOTE: Ensure that the size of this virtual drive is less than the size of the designated journal and storage drives.
Read Policy = No Read Ahead
RAID Level = RAID 1
Cache Policy: Direct IO
Strip Size: 64KB
Disk Cache Policy: Unchanged
Access Policy: Read Write
Size: 139236 MB
Write Policy: Write Through
Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, BOOTOS, Physical Drive Number = 1
Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background.
Set as Boot Drive Sets the BOOTOS virtual drive as the system boot drive.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 2
Virtual Drive Name = BOOTOS Creates a virtual drive leveraging the storage space available to physical drive number 2. NOTE: Ensure that the size of this virtual drive is less than the size of the designated journal and storage drives.
Read Policy = No Read Ahead
RAID Level = RAID 1
Cache Policy: Direct IO
Strip Size: 64KB
Disk Cache Policy: Unchanged
Access Policy: Read Write
Size: 139236 MB
Write Policy: Write Through
Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, BOOTOS, Physical Drive Number = 2
Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background.
Set as Boot Drive Sets the BOOTOS virtual drive as the system boot drive.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 3
Virtual Drive Name = JOURNAL Creates a virtual drive leveraging the storage space available to physical drive number 3.
Read Policy = No Read Ahead
RAID Level = RAID 0
Cache Policy: Direct IO
Strip Size: 64KB
Disk Cache Policy: Unchanged
Access Policy: Read Write
Size: 456809 MB
Write Policy: Write Through
Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, JOURNAL, Physical Drive Number = 3
Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 7
Virtual Drive Name = OSD1 Creates a virtual drive leveraging the storage space available to physical drive number 7.
Read Policy = No Read Ahead
RAID Level = RAID 0
Cache Policy: Direct IO
Strip Size: 64KB
Disk Cache Policy: Unchanged
Access Policy: Read Write
Size: 1143455 MB
Write Policy: Write Through
Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OSD1, Physical Drive Number = 7
Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 8
Virtual Drive Name = OSD2 Creates a virtual drive leveraging the storage space available to physical drive number 8.
Read Policy = No Read Ahead
RAID Level = RAID 0
Cache Policy: Direct IO
Strip Size: 64KB
Disk Cache Policy: Unchanged
Access Policy: Read Write
Size: 1143455 MB
Write Policy: Write Through
Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OSD2, Physical Drive Number = 8
Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 9
Virtual Drive Name = OSD3 Creates a virtual drive leveraging the storage space available to physical drive number 9.
Read Policy = No Read Ahead
RAID Level = RAID 0
Cache Policy: Direct IO
Strip Size: 64KB
Disk Cache Policy: Unchanged
Access Policy: Read Write
Size: 1143455 MB
Write Policy: Write Through
Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OSD3, Physical Drive Number = 9
Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background.
Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 10
Virtual Drive Name = OSD4 Creates a virtual drive leveraging the storage space available to physical drive number 10.
Read Policy = No Read Ahead
RAID Level = RAID 0
Cache Policy: Direct IO
Strip Size: 64KB
Disk Cache Policy: Unchanged
Access Policy: Read Write
Size: 1143455 MB
Write Policy: Write Through
Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OSD4, Physical Drive Number = 10
Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background.
Deploy the Virtual Infrastructure Manager
Within the Ultra M solution, OpenStack Platform Director (OSP-D) functions as the virtual infrastructure manager (VIM).
Deploying the VIM includes the following procedures:
■ Deploy the Undercloud
■ Deploy the Overcloud
Deploy the Undercloud
The Undercloud is the primary OSP-D node that is responsible for the deployment, configuration, and management of nodes (controller, compute, and Ceph storage) in the Overcloud. Within Ultra M, the OSP-D Server / Staging Server is the Undercloud.
Deploying the Undercloud involves installing and configuring Red Hat Enterprise Linux (RHEL) and OSP-D as shown in the workflow depicted in Figure 15.
NOTE: The information in this section assumes that the OSP-D Server / Staging Server was configured as described in Prepare the OSP-D Server / Staging Server.
Figure 15 - Workflow for Installing and Configuring Red Hat and OSP-D on a UCS-C Server
Install RHEL
General installation information and procedures are located in the RHEL and OSP-D product documentation:
■ https://access.redhat.com/documentation/en/red-hat-enterprise-linux/
■ https://access.redhat.com/documentation/en/red-hat-openstack-platform/
Prior to installing these products on the OSP-D Server/Staging Server, refer to Table 43 for settings required by Ultra M. Depending on the Ultra M model being deployed, detailed VIM deployment procedures are available here:
■ Appendix: Non-Hyper-Converged Ultra M VIM Deployment Procedures
■ Appendix: Hyper-Converged Ultra M Model VIM Deployment Procedures
Table 43 - OSP-D Server Red Hat Installation Settings
Parameters and Settings Description
Installation Summary > Network & Host Name > Ethernet (enp16s0f0) > Configure > IPv4 Setting
IP Address Configure and save settings for the network interface by which the OSP-D server can be accessed externally.
Netmask NOTE: The first, or top-most, interface shown in the list on the Network & Host Name screen should be used as the external interface for the OSP-D server.
Gateway
DNS Server
Search Domain
Installation Summary > Installation Destination > CiscoUCSC-MRAID12G (sda) > I will configure partitioning > Click here to create them automatically
Mount Point = / Adjusts the mount points created automatically by Red Hat.
Desired Capacity = 500 GB NOTE: Do not use LVM-based partitioning. Instead, delete /home/ and allocate its capacity to /.
Installation Summary > KDUMP
kdump = disabled It is recommended that kdump be disabled.
Installation Summary > Begin Installation > Root Password
Root Password Configure and confirm the root user password.
Configure RHEL
Once RHEL is installed, you must configure it prior to installing the OpenStack Undercloud. Table 44 identifies the required RHEL configuration parameters. Instructions for configuring these specific settings are available here:
■ Appendix: Non-Hyper-Converged Ultra M VIM Deployment Procedures
■ Appendix: Hyper-Converged Ultra M Model VIM Deployment Procedures
NOTE: The parameters described in Table 44 assume configuration through the Red Hat command line interface.
Table 44 - OSP-D Server Red Hat Configuration
Parameters and Settings Description
Non-root user account: The OSP-D installation process requires a non-root user called stack to execute commands.
User = stack
Password = stack
NOTE: It is recommended that you disable password requirements for this user when using sudo. This is done using the NOPASSWD:ALL parameter in sudoers.d as described in the OpenStack documentation.
New directories: OSP-D uses system images and Heat templates to create the Overcloud environment. These files must be organized and stored in the following directories in the stack user's home directory:
/images
/templates
Subscription manager HTTP proxy If needed, the subscription manager service can be configured to use an HTTP proxy.
System hostname Configures a fully qualified domain name (FQDN) for use on the OSP-D server.
NOTE: Be sure to configure the hostname as both static and transient.
NOTE: Beyond using the hostnamectl command, make sure the FQDN is also configured in the /etc/hosts file.
Attach the subscription-manager Pool ID Attaches the RHEL installation to the subscription entitlement pool ID displayed in the output of the sudo subscription-manager list --available --all command.
Disable all default content Disables the default content repositories associated with the repositories entitlement certificate.
Enable Red Hat repositories: Installs packages required by OSP-D and provides access to the latest server and OSP-D entitlements.
rhel-7-server-rpms
rhel-7-server-extras-rpms
rhel-7-server-rh-common-rpms
rhel-ha-for-rhel-7-server-rpms
rhel-7-server-openstack-9-rpms
rhel-7-server-openstack-9-director-rpms
rhel-7-server-openstack-10-rpms
NOTE: As indicated in Software Specifications, the supported OpenStack version differs based on the Ultra M model being deployed. If OpenStack 9 is required, only enable rhel-7-server-openstack-9-rpms and rhel-7-server-openstack-9-director-rpms. If OpenStack 10 is required, only enable rhel-7-server-openstack-10-rpms.
Install the python-tripleoclient package Installs command-line tools that are required for OSP-D installation and configuration.
Configure the undercloud.conf file with: Configures Undercloud parameters such as the system hostname and the local (provisioning) interface.
undercloud_hostname
local_interface
NOTE: undercloud.conf is based on a template which is provided as part of the OSP-D installation. By default, this template is located in /usr/share/instack-undercloud/. This template must first be copied to the stack user's home directory.
NOTE: The default setting for local_interface is eno1. This is the recommended local_interface setting.
Install OSP-D images: Download and install the disk images required by OSP-D for provisioning Overcloud nodes.
rhosp-director-images
rhosp-director-images-ipa
NOTE: The installed image archives must be copied to the /images folder in the stack user's home directory.
Install libguestfs-tools and virt-manager Installs and starts Virtual Machine Manager to improve debugging capability and to customize the Overcloud image.
NOTE: It is recommended that you enable root password authentication and permit root login in the SSHD configuration. This is done using the following commands:
virt-customize -a overcloud-full.qcow2 --root-password password:<password>
sudo virt-customize -a overcloud-full.qcow2 --run-command "systemctl disable chronyd"
Configure nameserver Specifies the nameserver IP address to be used by the Overcloud nodes to resolve hostnames through DNS.
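Because the repository set in Table 44 depends on the OpenStack version being deployed, the enable list can be derived programmatically before running subscription-manager. The sketch below is illustrative rather than a Red Hat tool; the helper name is an assumption, and the HA repository identifier is normalized to Red Hat's published channel name:

```python
# Choose the Red Hat repositories to enable for an Ultra M OSP-D server,
# following the version rule in Table 44: OpenStack 9 and OpenStack 10
# deployments enable different rhel-7-server-openstack-* repositories.
COMMON_REPOS = [
    "rhel-7-server-rpms",
    "rhel-7-server-extras-rpms",
    "rhel-7-server-rh-common-rpms",
    "rhel-ha-for-rhel-7-server-rpms",  # HA channel; name assumed from Red Hat's repo list
]

def repos_for(openstack_version):
    """Return the repository IDs to enable for the given OpenStack version."""
    if openstack_version == 9:
        extra = ["rhel-7-server-openstack-9-rpms",
                 "rhel-7-server-openstack-9-director-rpms"]
    elif openstack_version == 10:
        extra = ["rhel-7-server-openstack-10-rpms"]
    else:
        raise ValueError("Ultra M supports OpenStack 9 or 10 only")
    return COMMON_REPOS + extra

print(len(repos_for(10)))  # 5
```

Each returned repository ID would then be passed to sudo subscription-manager repos --enable, keeping the version-specific rule in one place.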
Deploy the Overcloud
The Overcloud consists of the Controller nodes, Compute nodes, Ceph Storage nodes (for non-Hyper-Converged Ultra M models), and/or OSD Compute nodes (for Hyper-Converged Ultra M models).
Deploying the Overcloud involves the process workflow depicted in Figure 16.
Figure 16 - Workflow for Installing and Configuring Ultra M Overcloud Nodes
NOTE: This information also assumes that the OSP-D Undercloud server was installed and configured as described in Deploy the Undercloud. In addition, the information in this section assumes that the Overcloud node types identified above were configured as described in the relevant sections of this document:
■ Prepare the Controller Nodes
■ Prepare the Compute Nodes
■ Prepare the OSD Compute Nodes
■ Prepare the Ceph Nodes
Table 45 identifies the OSP-D Overcloud configuration parameters. Instructions for configuring these specific settings are available here:
■ Appendix: Non-Hyper-Converged Ultra M VIM Deployment Procedures
■ Appendix: Hyper-Converged Ultra M Model VIM Deployment Procedures
NOTE: The parameters described in Table 45 assume configuration through the Red Hat command line interface.
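As an illustration of the node manifest described in Table 45 below, a minimal instackenv.json entry might look like the following. All values are hypothetical, and the pm_type driver field is an assumption based on typical OSP-D manifests; the Cisco-provided template remains the authoritative source:

```json
{
  "nodes": [
    {
      "name": "controller-0",
      "pm_type": "pxe_ipmitool",
      "pm_user": "admin",
      "pm_password": "<CIMC password>",
      "pm_addr": "192.0.2.101",
      "mac": ["aa:bb:cc:dd:ee:01"]
    }
  ]
}
```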
Table 45 - OSP-D Overcloud Configuration Parameters
Parameters and Settings Description
Configure the instackenv.json file with: mac, pm_addr, pm_user, pm_password

instackenv.json defines a manifest of the server nodes that comprise the Overcloud. Cisco provides a version of this file to use as a template. Place this template in the stack user's home directory and configure the following parameters for all nodes (controllers, compute, and Ceph storage) based on your hardware:

■ mac: The MAC address of the server's provisioning network interface.
■ pm_addr: The CIMC IP address of the server.
■ pm_user: The name of the CIMC administrative user for the server.
■ pm_password: The password for the CIMC administrative user.
NOTE: Server names are pre-populated in the template provided by Cisco. It is highly recommended that you do not modify these names for your deployment.
Copy the OpenStack environment templates to /home/stack: Aspects of the OpenStack Overcloud environment are specified through custom configuration templates that override default OSP-D settings. These files are contained in a /custom-templates directory that must be downloaded to the stack user's home directory (e.g. /home/stack/custom-templates).
Configure the network.yaml template with: ExternalNetCidr, ExternalAllocationPools, ExternalInterfaceDefaultRoute, ExternalNetworkVlanID

Specifies network parameters for use within the Ultra M Overcloud. Descriptions for these parameters are provided in the network.yaml file.
Configure the layout.yaml template with: NtpServer, ControllerCount, ComputeCount, CephStorageCount, OsdComputeCount, ControllerIPs, ComputeIPs, OsdComputeIPs

Specifies parameters that dictate how to deploy the Overcloud nodes, such as the NTP server used by the Overcloud, the number of each type of node, and the IP addresses used by each of those nodes on the VIM networks.
Configure the controller-nics.yaml template with: ExternalNetworkVlanID, ExternalInterfaceDefaultRoute

Configures network interface card (NIC) parameters for the controller nodes.
Configure the only-compute-nics.yaml template with: ExternalNetworkVlanID, ExternalInterfaceDefaultRoute

Configures NIC parameters for the compute nodes.
Configure the compute-nics.yaml template with: ExternalNetworkVlanID, ExternalInterfaceDefaultRoute

Configures NIC parameters for the compute nodes.
Configure SR-IOV
The non-Hyper-Converged Ultra M models implement OSP-D version 9. This version of OSP-D requires the manual configuration of SR-IOV after the Overcloud has been deployed as described in this section.
NOTE: Ultra M XS Single VNF and Multi-VNF models are based on a later version of OSP-D in which SR-IOV configuration is performed as part of the Overcloud deployment.
SR-IOV is configured on each of the compute and controller Overcloud nodes. Table 46 provides information on the parameters that must be configured on the compute nodes and Table 47 provides information on the parameters that must be configured on the controller nodes. Instructions for configuring these specific settings are available here:
■ Appendix: Non-Hyper-Converged Ultra M VIM Deployment Procedures
■ Appendix: Hyper-Converged Ultra M Model VIM Deployment Procedures
NOTE: The parameters described in Table 46 and Table 47 assume configuration through the Red Hat command line interface.
Table 46 - Compute Node SR-IOV Configuration Parameters
Parameters and Settings | Description
Enable intel_iommu Edit the /etc/default/grub file and append the intel_iommu=on setting to the GRUB_CMDLINE_LINUX parameter.
NOTE: These changes must be saved to the GRUB boot configuration using the following command:
grub2-mkconfig -o /boot/grub2/grub.cfg
NOTE: The compute node must be rebooted after executing this command.
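The append-to-GRUB_CMDLINE_LINUX edit described above can be sketched as a small Python helper. This is an illustrative sketch only; the sample file contents are assumptions, and on a real node you would still edit /etc/default/grub and run grub2-mkconfig as shown.

```python
import re

def enable_iommu(grub_text):
    """Append intel_iommu=on to GRUB_CMDLINE_LINUX if not already present."""
    def patch(match):
        args = match.group(2)
        if "intel_iommu=on" in args:
            return match.group(0)  # already enabled; leave untouched
        return '%s"%s intel_iommu=on"' % (match.group(1), args)
    return re.sub(r'(GRUB_CMDLINE_LINUX=)"([^"]*)"', patch, grub_text)

# Illustrative /etc/default/grub contents
grub = 'GRUB_TIMEOUT=5\nGRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"\n'
print(enable_iommu(grub))
```

The helper is idempotent, so running it twice does not duplicate the setting.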
Configure ixgbe.conf to support a maximum of 16 VFs: Specifies the number of virtual function (VF) ports supported by ixgbe, the base PCIe NIC driver, to be 16 using the following command:
echo "options ixgbe max_vfs=16" >>/etc/modprobe.d/ixgbe.conf
NOTE: After editing the ixgbe.conf file, back up the existing ramdisk image using the following command:
cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak
NOTE: The initramfs must then be rebuilt via Dracut so that it includes the updated ixgbe.conf, using the following command:
dracut -f -v -M --install /etc/modprobe.d/ixgbe.conf
Set the MTU for all NICs to 9000: Sets the MTU for each NIC to 9000 octets using the following command:
sed -i -re 's/MTU.*//g' /etc/sysconfig/network-scripts/ifcfg-
NOTE: To persist the MTU configuration after a reboot, either disable NetworkManager or add NM_CONTROLLED=no in the interface configuration:
service NetworkManager stop
echo "NM_CONTROLLED=no" >> /etc/sysconfig/network-scripts/ifcfg-
NOTE: The compute nodes must be rebooted after configuring the interface MTUs.
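The ifcfg edit above (remove any existing MTU line, set 9000, and take the interface out of NetworkManager's control) can be sketched in Python. The sample file contents are illustrative assumptions, not the actual node configuration:

```python
def set_jumbo_mtu(ifcfg_text, mtu=9000):
    """Drop any existing MTU=/NM_CONTROLLED= lines, then set both values."""
    lines = [l for l in ifcfg_text.splitlines()
             if not l.startswith("MTU") and not l.startswith("NM_CONTROLLED")]
    lines.append("MTU=%d" % mtu)
    lines.append("NM_CONTROLLED=no")
    return "\n".join(lines) + "\n"

# Illustrative ifcfg-* contents
cfg = "DEVICE=enp10s0f0\nBOOTPROTO=none\nMTU=1500\n"
print(set_jumbo_mtu(cfg))
```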
Disable default repositories and enable the following repositories:
rhel-7-server-rpms
rhel-7-server-extras-rpms
rhel-7-server-openstack-9-rpms
rhel-7-server-openstack-9-director-rpms
rhel-7-server-rh-common-rpms
Install the sriov-nic-agent. Installs the SR-IOV NIC agent using the following command:
sudo yum install openstack-neutron-sriov-nic-agent
Enable NoopFirewallDriver in the /etc/neutron/plugin.ini file: The NoopFirewallDriver can be enabled using the following commands:

openstack-config --set /etc/neutron/plugin.ini securitygroup firewall_driver neutron.agent.firewall.NoopFirewallDriver
openstack-config --get /etc/neutron/plugin.ini securitygroup firewall_driver
Configure neutron-server.service to --config-
Remove --config- setting.
NOTE: The OpenStack networking SR-IOV Agent must be started after executing the above commands. This is done using the following commands:
systemctl enable neutron-sriov-nic-agent.service
systemctl start neutron-sriov-nic-agent.service
Associate the available VFs with each physical network in the nova.conf file: The VF-physical network associations can be made using the following command:
openstack-config --set /etc/nova/nova.conf DEFAULT pci_passthrough_whitelist '[{"devname":"enp10s0f0", "physical_network":"phys_pcie1_0"},{"devname":"enp10s0f1", "physical_network":"phys_pcie1_1"},{"devname":"enp133s0f0", "physical_network":"phys_pcie4_0"},{"devname":"enp133s0f1", "physical_network":"phys_pcie4_1"}]'
NOTE: The nova-compute service must be restarted after modifying the nova.conf file using the following command:
systemctl restart openstack-nova-compute
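Because pci_passthrough_whitelist is a JSON list, a quick sanity check of the devname-to-physical_network mapping can catch typos before restarting nova-compute. This sketch uses the example device and network names from the command above:

```python
import json

# The whitelist value from the example command above
whitelist = json.loads(
    '[{"devname":"enp10s0f0","physical_network":"phys_pcie1_0"},'
    '{"devname":"enp10s0f1","physical_network":"phys_pcie1_1"},'
    '{"devname":"enp133s0f0","physical_network":"phys_pcie4_0"},'
    '{"devname":"enp133s0f1","physical_network":"phys_pcie4_1"}]')

# Build a devname -> physical_network map and check for duplicate devnames
mapping = {entry["devname"]: entry["physical_network"] for entry in whitelist}
assert len(mapping) == len(whitelist), "duplicate devname in whitelist"
for dev, net in sorted(mapping.items()):
    print("%s -> %s" % (dev, net))
```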
Table 47 - Controller Node SR-IOV Configuration Parameters
Parameters and Settings | Description
Configure Neutron to support jumbo MTU sizes: Sets dhcp-option-force=26,9000 in dnsmasq-neutron.conf and global_physnet_mtu = 9000 in neutron.conf.
NOTE: The neutron server must be restarted after making these configuration changes.
Enable sriovnicswitch in the /etc/neutron/plugin.ini file: sriovnicswitch can be enabled using the following commands:

openstack-config --set /etc/neutron/plugin.ini ml2 tenant_network_types vlan
openstack-config --set /etc/neutron/plugin.ini ml2 type_drivers vlan
openstack-config --set /etc/neutron/plugin.ini ml2 mechanism_drivers openvswitch,sriovnicswitch
openstack-config --set /etc/neutron/plugin.ini ml2_type_vlan network_vlan_ranges datacentre:1001:1500
NOTE: The VLAN ranges must correspond to those identified in Appendix: Network Definitions (Layer 2 and 3). The mappings are as follows:
■ datacentre = Other-Virtio
■ phys_pcie1(0,1) = SR-IOV (Phys-PCIe1)
■ phys_pcie4(0,1) = SR-IOV (Phys-PCIe4)
Add the associated SR-IOV physnets and physnet network_vlan_range settings in the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini file: SR-IOV physnets and VLAN ranges can be added using the following commands:

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_sriov.ini ml2_sriov agent_required True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_sriov.ini ml2_sriov supported_pci_vendor_devs 8086:10ed
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_sriov.ini sriov_nic physical_device_mappings phys_pcie1_0:enp10s0f0,phys_pcie1_1:enp10s0f1,phys_pcie4_0:enp133s0f0,phys_pcie4_1:enp133s0f1
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_sriov.ini sriov_nic exclude_devices ''
Configure the neutron-server.service --config-
NOTE: The neutron-server service must be restarted to apply the configuration.
Enable PciPassthrough and AvailabilityZoneFilter under filters in nova.conf: Allows proper scheduling of SR-IOV devices. The compute scheduler on the controller node needs to use the FilterScheduler and PciPassthroughFilter filters, configured using the following commands:
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_available_filters nova.scheduler.filters.all_filters
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_default_filters RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,CoreFilter,PciPassthroughFilter,NUMATopologyFilter,SameHostFilter,DifferentHostFilter,AggregateInstanceExtraSpecsFilter
NOTE: The nova conductor and scheduler must be restarted after executing the above commands.
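Since scheduler_default_filters is a single comma-separated value, a simple check that the SR-IOV-relevant filters made it into the list can help verify the edit. This sketch uses the filter list from the command above:

```python
# The scheduler_default_filters value from the example command above
filters = ("RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,"
           "ComputeCapabilitiesFilter,ImagePropertiesFilter,"
           "ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,"
           "CoreFilter,PciPassthroughFilter,NUMATopologyFilter,"
           "SameHostFilter,DifferentHostFilter,AggregateInstanceExtraSpecsFilter")

enabled = filters.split(",")
# SR-IOV scheduling relies on these filters being present
for required in ("PciPassthroughFilter", "AvailabilityZoneFilter", "NUMATopologyFilter"):
    assert required in enabled, "%s missing from scheduler_default_filters" % required
print("%d filters enabled" % len(enabled))
```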
Deploy the UAS, the VNFM, and the VNF
Once the OpenStack Undercloud and Overcloud VIM have been successfully deployed on the Ultra M hardware, you must deploy the USP-based VNF.
This process is performed through the Ultra Automation Services (UAS). UAS is an automation framework consisting of a set of software modules used to automate the deployment of the USP-based VNF and related components such as the VNFM.
These automation workflows are described in detail in the Ultra Service Platform Deployment Automation Guide. Refer to that document for more information.
Event and Syslog Management Within the Ultra M Solution
Hyper-Converged Ultra M solution models support a centralized monitor and management function. This function provides a central aggregation point for events (faults and alarms) and a proxy point for syslogs generated by the different components within the solution as identified in Table 48. The monitor and management function runs on the OSP-D Server which is also referred to as the Ultra M Manager Node.
Figure 17 Ultra M Manager Functions
The software to enable this functionality is distributed as a stand-alone RPM as described in Install the Ultra M Manager RPM. (It is not packaged with the Ultra Services Platform (USP) release ISO.) Once installed, additional configuration is required based on the desired functionality as described in the following sections:
■ Syslog Proxy
■ Event Aggregation
Syslog Proxy
The Ultra M Manager Node can be configured as a proxy server for syslogs received from UCS servers and/or OpenStack. As a proxy, the Ultra M Manager Node acts as a single logging collection point for syslog messages from these components and relays them to a remote collection server.
NOTES:
■ This functionality is currently supported only with Ultra M deployments based on OSP 10 and that leverage the Hyper-Converged architecture.
■ You must configure a remote collection server to receive and filter log files sent by the Ultra M Manager Node.
■ Though you can configure syslogging at any severity level your deployment scenario requires, it is recommended that you only configure syslog levels with severity levels 0 (emergency) through 4 (warning).
Once the Ultra M Manager RPM is installed, a script provided with this release allows you to quickly enable syslog on the nodes and to set the Ultra M Manager Node as the proxy. Leveraging inputs from a YAML-based configuration file, the script:
■ Inspects the nodes within the Undercloud and Overcloud
■ Logs on to each node
■ Enables syslogging at the specified level for both the UCS hardware and for the following OpenStack services on their respective nodes:
— Controller Nodes: Nova, Keystone, Glance, Cinder, and Ceph
— Compute Nodes: Nova
— OSD Compute Nodes: Nova, Ceph
NOTE: Syslogging support for Ceph is only available using Ultra M Manager 1.0.4.
■ Sets the Ultra M Manager Node as the syslog proxy server
NOTE: The use of this script assumes that all of the nodes use the same login credentials.
CAUTION: Enabling/disabling syslog proxy functionality on OpenStack changes the service configuration files and automatically restarts the services. It is highly recommended that this process only be performed during a maintenance window.
To enable this functionality:
1. Install the Ultra M Manager bundle RPM using the instructions in Install the Ultra M Manager RPM.
NOTE: This step is not needed if the Ultra M Manager bundle was previously installed.
2. Become the root user.
sudo su
3. Verify that there are no previously existing configuration files for logging information messages in /etc/rsyslog.d.
a. Navigate to /etc/rsyslog.d.
cd /etc/rsyslog.d
ls -al
Example output:
total 24
drwxr-xr-x.   2 root root  4096 Sep  3 23:17 .
drwxr-xr-x. 152 root root 12288 Sep  3 23:05 ..
-rw-r--r--.   1 root root    49 Apr 21 00:03 listen.conf
-rw-r--r--.   1 root root   280 Jan 12  2017 openstack-swift.conf
b. Check the listen.conf file.
cat listen.conf
Example output:
$SystemLogSocketName /run/systemd/journal/syslog
c. Check the configuration of the openstack-swift.conf.
cat openstack-swift.conf
Example configuration:
# LOCAL0 is the upstream default and LOCAL2 is what Swift gets in
# RHOS and RDO if installed with Packstack (also, in docs).
# The breakout action prevents logging into /var/log/messages, bz#997983.
local0.*;local2.* /var/log/swift/swift.log
& stop
4. Configure IPTables.
a. On the OSP-D node, execute the following command to edit the IPTables configuration.
$ vi /etc/sysconfig/iptables
b. Add the following lines at the beginning of the filter configuration in the IPTables file.
-A INPUT -p udp -m multiport --dports 514 -m comment --comment "514 - For UDP syslog" -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m multiport --dports 514 -m comment --comment "514 - For TCP syslog" -m state --state NEW -j ACCEPT
c. Restart the IPTables service.
systemctl restart iptables
d. Verify the status of IPTables service.
systemctl status iptables
Example output:
iptables.service - IPv4 firewall with iptables
   Loaded: loaded (/usr/lib/systemd/system/iptables.service; enabled; vendor preset: disabled)
   Active: active (exited) since Mon 2017-11-20 13:31:08 EST; 10s ago
  Process: 3821 ExecStop=/usr/libexec/iptables/iptables.init stop (code=exited, status=9)
  Process: 4258 ExecStart=/usr/libexec/iptables/iptables.init start (code=exited, status=0/SUCCESS)
 Main PID: 4258 (code=exited, status=0/SUCCESS)

Nov 20 13:31:08 tb3-ospd.mitg-bxb300.cisco.com systemd[1]: Starting IPv4 firewall with iptables...
Nov 20 13:31:08 tb3-ospd.mitg-bxb300.cisco.com iptables.init[4258]: iptables: Applying firewall rules: [ OK ]
Nov 20 13:31:08 tb3-ospd.mitg-bxb300.cisco.com systemd[1]: Started IPv4 firewall with iptables.
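The rule insertion in step 4b (the syslog ACCEPT rules must precede any existing rules in the filter table) can be sketched in Python. The sample iptables content is an illustrative assumption:

```python
SYSLOG_RULES = [
    '-A INPUT -p udp -m multiport --dports 514 -m comment --comment "514 - For UDP syslog" -m state --state NEW -j ACCEPT',
    '-A INPUT -p tcp -m multiport --dports 514 -m comment --comment "514 - For TCP syslog" -m state --state NEW -j ACCEPT',
]

def add_syslog_rules(iptables_text):
    """Insert the syslog ACCEPT rules before the first existing -A rule."""
    lines = iptables_text.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("-A"):
            return "\n".join(lines[:i] + SYSLOG_RULES + lines[i:]) + "\n"
    return "\n".join(lines + SYSLOG_RULES) + "\n"

# Illustrative /etc/sysconfig/iptables fragment
sample = "*filter\n:INPUT ACCEPT [0:0]\n-A INPUT -j REJECT\nCOMMIT\n"
print(add_syslog_rules(sample))
```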
5. Enable syslogging to the external server by configuring the /etc/rsyslog.conf file.
vi /etc/rsyslog.conf
a. Enable TCP/UDP reception.
# provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
b. Disable logging for private authentication messages.
# Don't log private authentication messages!
#*.info;mail.none;authpriv.none;cron.none    /var/log/messages
c. Configure the desired log severity levels.
# log 0-4 severity logs to external server 172.21.201.53
*.4,3,2,1,0 @
This enables the collection and reporting of logs with severity levels 0 (emergency) through 4 (warning).
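The numeric levels in the selector correspond to the standard syslog severities. A minimal lookup table makes the recommended 0-4 range explicit:

```python
# Standard syslog severity levels (RFC 5424); the guide recommends forwarding 0-4
SEVERITIES = {
    0: "emergency",
    1: "alert",
    2: "critical",
    3: "error",
    4: "warning",
    5: "notice",
    6: "informational",
    7: "debug",
}

recommended = [SEVERITIES[n] for n in (0, 1, 2, 3, 4)]
print("forwarded severities:", ", ".join(recommended))
```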
CAUTION: Though it is possible to configure the system to locally store syslogs on the Ultra M Manager Node, it is highly recommended that you avoid doing so to avoid the risk of data loss and to preserve disk space.
6. Restart the syslog server.
systemctl restart rsyslog
7. Navigate to /etc.
cd /etc
8. Create and/or edit the ultram_cfg.yaml file based on your VIM Orchestrator and VIM configuration. A sample of this configuration file is provided in Appendix: Example ultram_cfg.yaml File.
NOTE: The ultram_cfg.yaml file pertains to both the syslog proxy and event aggregation functionality. Some parts of this file may have already been configured in relation to the other function.
vi ultram_cfg.yaml
a. Optional.
under-cloud:
  OS_AUTH_URL:
b. Optional.
over-cloud:
  enabled: true
  environment:
    OS_AUTH_URL:
    OS_REGION_NAME: regionOne
c. Specify the IP address of the Ultra M Manager Node to be the proxy server.
<-- SNIP -->
rsyslog:
  level: 4,3,2,1,0
  proxy-rsyslog:
NOTE:
— You can modify the syslog levels to report according to your requirements using the level parameter as shown above.
— If you are copying the above information from an older configuration, make sure the proxy-rsyslog IP address does not contain a port number.
d. Optional. Configure the CIMC login information for each of the nodes on which syslogging is to be enabled.
ucs-cluster:
  enabled: true
  user:
NOTE: The use of this script assumes that all of the nodes use the same login credentials.
9. Navigate to /opt/cisco/usp/ultram-manager.
cd /opt/cisco/usp/ultram-manager
10. Encrypt the clear text passwords in the ultram_cfg.yaml file.
utils.py --secure-cfg /etc/ultram_cfg.yaml
NOTE: Once encrypted, an indication is appended to the end of the file (e.g. encrypted: true). Refer to Encrypting Passwords in the ultram_cfg.yaml File for more information.
11. Optional. Disable rsyslog if it was previously configured on the UCS servers.
./ultram_syslogs.py --cfg /etc/ultram_cfg.yaml -u -d
12. Execute the ultram_syslogs.py script to load the configuration on the various nodes.
./ultram_syslogs.py --cfg /etc/ultram_cfg.yaml -o -u
NOTE: Additional command line options for the ultram_syslogs.py script can be seen by entering ultram_syslogs.py --help at the command prompt. An example of the output of this command is below:
usage: ultram_syslogs.py [-h] -c CFG [-d] [-u] [-o]

optional arguments:
  -h, --help            show this help message and exit
  -c CFG, --cfg CFG     Configuration file
  -d, --disable-syslog  Disable Syslog
  -u, --ucs             Apply syslog configuration on UCS servers
  -o, --openstack       Apply syslog configuration on OpenStack
Example output:
2017-09-13 15:24:23,305 - Configuring Syslog server 192.200.0.1:514 on UCS cluster
2017-09-13 15:24:23,305 - Get information about all the nodes from under-cloud
2017-09-13 15:24:37,178 - Enabling syslog configuration on 192.100.3.5
2017-09-13 15:24:54,686 - Connected.
2017-09-13 15:25:00,546 - syslog configuration success.
2017-09-13 15:25:00,547 - Enabling syslog configuration on 192.100.3.6
2017-09-13 15:25:19,003 - Connected.
2017-09-13 15:25:24,808 - syslog configuration success.
<---SNIP--->
<---SNIP--->
2017-09-13 15:46:08,715 - Enabling syslog configuration on vnf1-osd-compute-1 [192.200.0.104]
2017-09-13 15:46:08,817 - Connected
2017-09-13 15:46:09,046 - - /etc/rsyslog.conf
2017-09-13 15:46:09,047 - Enabling syslog ...
2017-09-13 15:46:09,130 - Restarting rsyslog
2017-09-13 15:46:09,237 - Restarted
2017-09-13 15:46:09,321 - - /etc/nova/nova.conf
2017-09-13 15:46:09,321 - Enabling syslog ...
2017-09-13 15:46:09,487 - Restarting Services 'openstack-nova-compute.service'
13. Ensure that client log messages are being received by the server and are uniquely identifiable.
NOTES:
— If necessary, configure a unique tag and hostname as part of the syslog configuration/template for each client.
— Syslog is very particular about file permissions and ownership. If need be, manually configure permissions for the log file on the client using the following command:
chmod +r
Event Aggregation
The Ultra M Manager Node can be configured to aggregate events received from different Ultra M components as identified in Table 48.
NOTE: This functionality is currently supported only with Ultra M deployments based on OSP 10 and that leverage the Hyper-Converged architecture.
Table 48 - Component Event Sources
Solution Component | Event Source Type | Details
UCS server hardware | CIMC | Reports on events collected from UCS C-series hardware via CIMC-based subscription.
These events are monitored in real-time.
VIM (Overcloud) | OpenStack service health | Reports on OpenStack service fault events pertaining to:
■ Failures (stopped, restarted)
■ High availability
■ Ceph / storage
■ Neutron / compute host and network agent
■ Nova scheduler (VIM instances)
By default, these events are collected during a 900 second polling interval as specified within the ultram_cfg.yaml file.
NOTE: In order to ensure optimal performance, it is strongly recommended that you do not change the default polling-interval.
UAS (AutoVNF, UEM, and ESC) | UAS cluster/USP management component events | Reports on UAS service fault events pertaining to:
■ Service failure (stopped, restarted)
■ High availability
■ AutoVNF
■ UEM
■ ESC (VNFM)
By default, these events are collected during a 900 second polling interval as specified within the ultram_cfg.yaml file.
NOTE: In order to ensure optimal performance, it is strongly recommended that you do not change the default polling-interval.
Events received from the solution components, regardless of the source type, are mapped against the Ultra M SNMP MIB (CISCO-ULTRAM-MIB.my, refer to Appendix: Ultra M MIB). The event data is parsed and categorized against the following conventions:
■ Fault code: A unique ID representing the type of fault. Refer to the fault code convention within the Ultra M MIB for more information.
■ Severity: The severity associated with the fault. Since the Ultra M Manager Node aggregates events from different components within the solution, the severities supported within the Ultra M Manager Node MIB map to those for the specific components. Refer to Appendix: Ultra M Component Event Severity and Fault Code Mappings for details.
■ Domain: The component in which the fault occurred (e.g. UCS hardware, VIM, UEM, etc.). Refer to Appendix: Ultra M MIB for more information.
UAS and OpenStack events are monitored at the configured polling interval as described in Table 48. At the polling interval, the Ultra M Manager Node:
1. Collects data from UAS and OpenStack.
2. Generates/updates .log and .report files and an SNMP-based fault table with this information. It also includes related data about the fault such as the specific source, creation time, and description.
3. Processes any events that occurred:
a. If an error or fault event is identified, then a .error file is created and an SNMP trap is sent.
b. .
c. If no event occurred, then no further action is taken beyond step 2.
UCS events are monitored and acted upon in real-time. When events occur, the Ultra M Manager generates a .log file, updates the SNMP fault table, and an SNMP trap is sent.
These processes are illustrated in Figure 18. Refer to About Ultra M Manager Log Files for more information.
Figure 18 Ultra M Manager Node Event Aggregation Operation
An example of the snmp_faults_table file is shown below and the entry syntax is described in Figure 19:
"0": [3, "neutonoc-osd-compute-0: neutron-sriov-nic-agent.service", 1, 8, "status known"], "1": [3, "neutonoc-osd-compute-0: ntpd", 1, 8, "Service is not active, state: inactive"], "2": [3, "neutonoc- osd-compute-1: neutron-sriov-nic-agent.service", 1, 8, "status known"], "3": [3, "neutonoc-osd- compute-1: ntpd", 1, 8, "Service is not active, state: inactive"], "4": [3, "neutonoc-osd-compute-2: neutron-sriov-nic-agent.service", 1, 8, "status known"], "5": [3, "neutonoc-osd-compute-2: ntpd", 1, 8, "Service is not active, state: inactive"],
Refer to About Ultra M Manager Log Files for more information.
Figure 19 SNMP Fault Table Entry Description
Each element in the SNMP Fault Table Entry corresponds to an object defined in the Ultra M SNMP MIB as described in Table 49. (Refer also to Appendix: Ultra M MIB.)
Table 49 - SNMP Fault Entry Table Element Descriptions
SNMP Fault Table Entry Element | MIB Object | Additional Details
Entry ID cultramFaultIndex A unique identifier for the entry
Fault Domain cultramFaultDomain The component area in which the fault occurred. The following domains are supported in this release:
■ hardware(1) : Hardware including UCS servers
■ vim(3) : OpenStack VIM manager
■ uas(4) : Ultra Automation Services Modules
Fault Source cultramFaultSource Information identifying the specific component within the Fault Domain that generated the event.
The format of the information is different based on the Fault Domain. Refer to Table 50 for details.
Fault Severity cultramFaultSeverity The severity associated with the fault as one of the following:
■ emergency(1) : System level FAULT impacting multiple VNFs/Services
■ critical(2) : Critical Fault specific to VNF/Service
■ major(3) : component level failure within VNF/service.
■ alert(4) : warning condition for a service/VNF, may eventually impact service.
■ informational(5) : informational only, does not impact service
Refer to Appendix: Ultra M Component Event Severity and Fault Code Mappings for details on how these severities map to events generated by the various Ultra M components.
Fault Code cultramFaultCode A unique ID representing the type of fault. The following codes are supported:
■ other(1) : Other events
■ networkConnectivity(2) : Network Connectivity Failure Events
■ resourceUsage(3) : Resource Usage Exhausted Event
■ resourceThreshold(4) : Resource Threshold crossing alarms
■ hardwareFailure(5) : Hardware Failure Events
■ securityViolation(6) : Security Alerts
■ configuration(7) : Config Error Events
■ serviceFailure(8) : Process/Service failures
Refer to Appendix: Ultra M Component Event Severity and Fault Code Mappings for details on how these fault codes map to events generated by the various Ultra M components.
Fault Description cultramFaultDescription A message containing details about the fault.
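Using the element order from the snmp_faults_table example and the MIB objects in Table 49, one fault-table entry can be decoded into named fields. This is an illustrative sketch; the entry values are taken from the earlier example, and the field order (domain, source, severity, code, description) is inferred from that example:

```python
# Value-to-name maps taken from Table 49
SEVERITY = {1: "emergency", 2: "critical", 3: "major", 4: "alert", 5: "informational"}
FAULT_CODE = {1: "other", 2: "networkConnectivity", 3: "resourceUsage",
              4: "resourceThreshold", 5: "hardwareFailure", 6: "securityViolation",
              7: "configuration", 8: "serviceFailure"}
DOMAIN = {1: "hardware", 3: "vim", 4: "uas"}

def decode_fault(entry):
    """Map a [domain, source, severity, code, description] list to MIB object names."""
    domain, source, severity, code, description = entry
    return {
        "cultramFaultDomain": DOMAIN[domain],
        "cultramFaultSource": source,
        "cultramFaultSeverity": SEVERITY[severity],
        "cultramFaultCode": FAULT_CODE[code],
        "cultramFaultDescription": description,
    }

# Entry "1" from the snmp_faults_table example above
fault = decode_fault([3, "neutonoc-osd-compute-0: ntpd", 1, 8,
                      "Service is not active, state: inactive"])
print(fault["cultramFaultSeverity"], fault["cultramFaultCode"])
```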
Table 50 - cultramFaultSource Format Values
Fault Domain | Format Value of cultramFaultSource
Hardware (UCS
Where:
UAS
Where:
VIM
Fault and alarm collection and aggregation functionality within the Hyper-Converged Ultra M solution is configured and enabled through the ultram_cfg.yaml file. (An example of this file is located in Appendix: Example ultram_cfg.yaml File.) Parameters in this file dictate feature operation and enable SNMP on the UCS servers and event collection from the other Ultra M solution components. Parameters within this file can also be used to suppress faults. Refer to Configuring Fault Suppression for more information.
NOTE: Support for fault suppression is available as of Ultra M Manager RPM 1.0.4.
To enable this functionality on the Ultra M solution:
1. Install the Ultra M Manager bundle RPM using the instructions in Install the Ultra M Manager RPM.
NOTE: This step is not needed if the Ultra M Manager bundle was previously installed.
2. Become the root user.
sudo -i
3. Navigate to /etc.
cd /etc
4. Edit the ultram_cfg.yaml file based on your deployment scenario. An example of this file is located in Appendix: Example ultram_cfg.yaml File.
NOTE: The ultram_cfg.yaml file pertains to both the syslog proxy and event aggregation functionality. Some parts of this may have been configured in relation to the other function. In addition, refer to Configuring Fault Suppression for information on configuring this file to suppress faults.
5. Navigate to /opt/cisco/usp/ultram-manager.
cd /opt/cisco/usp/ultram-manager
6. Encrypt the clear text passwords in the ultram_cfg.yaml file.
utils.py --secure-cfg /etc/ultram_cfg.yaml
NOTE: Once encrypted, an indication is appended to the end of the file (e.g. encrypted: true). Refer to Encrypting Passwords in the ultram_cfg.yaml File for more information.
7. Start the Ultra M Manager Service.
NOTE: Subsequent configuration changes require you to restart the health monitor service. Refer to Restarting the Ultra M Manager Service for details.
8. Verify the configuration by checking the ultram_health.log file.
cat /var/log/cisco/ultram_health.log
Install the Ultra M Manager RPM
The Ultra M Manager functionality described in this chapter is enabled through the Ultra M Manager RPM. This RPM is distributed both as part of the USP ISO and as a separate RPM bundle.
Ensure that you have access to this RPM bundle prior to proceeding with the instructions below.
To access the Ultra M Manager RPM packaged within the USP ISO, onboard the ISO and navigate to the ultram_health directory. Refer to the USP Deployment Automation Guide for instructions on onboarding the USP ISO.
1. Optional. Remove any previously installed versions of the Ultra M Manager per the instructions in Uninstalling the Ultra M Manager.
2. Log on to the OSP-D Server/Ultra M Manager Node.
3. Become the root user.
sudo -i
4. Copy the ultram-manager bundle RPM file to the OSP-D Server/Ultra M Manager Node.
5. Navigate to the directory in which you copied the file.
6. Install the ultram-manager bundle RPM that was distributed with the ISO.
yum install -y ultram-manager-
A message similar to the following is displayed upon completion:
Installed: ultram-manager.x86_64 0:5.1.6-2
Complete!
7. Verify that the installation is successful.
yum list installed | grep ultram-manager
Look for a message similar to the one below within the output of the command:
ultram-manager.x86_64 342:1.0.1-1 installed
8. Verify that log rotation is enabled in support of the syslog proxy functionality by checking the logrotate file.
cd /etc/cron.daily
ls -al
Example output:
total 28
drwxr-xr-x.   2 root root  4096 Sep 10 18:15 .
drwxr-xr-x. 128 root root 12288 Sep 11 18:12 ..
-rwx------.   1 root root   219 Jan 24  2017 logrotate
-rwxr-xr-x.   1 root root   618 Mar 17  2014 man-db.cron
-rwx------. 1 root root 256 Jun 21 16:57 rhsmd
cat /etc/cron.daily/logrotate
Example output:
#!/bin/sh
/usr/sbin/logrotate -s /var/lib/logrotate/logrotate.status /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit 0
9. Create and configure the ultram_health file.
cd /etc/logrotate.d
vi ultram_health
/var/log/cisco/ultram-manager/* {
    size 50M
    rotate 30
    missingok
    notifempty
    compress
}
10. Proceed to either Syslog Proxy or Event Aggregation to configure the desired functionality.
Restarting the Ultra M Manager Service
In the event of a configuration change or a server reboot, the Ultra M Manager service must be restarted.
To restart the health monitor service:
1. Check the Ultra M Manager Service Status.
2. Stop the Ultra M Manager Service.
3. Start the Ultra M Manager Service.
4. Check the Ultra M Manager Service Status.
Check the Ultra M Manager Service Status
It may be necessary to check the status of the Ultra M Manager service.
NOTE: These instructions assume that you are already logged into the OSP-D Server/Ultra M Manager Node as the root user.
To check the Ultra M Manager status:
1. Check the service status.
service ultram_health.service status
Example Output Inactive Service:
Redirecting to /bin/systemctl status ultram_health.service
● ultram_health.service - Cisco UltraM Health monitoring Service
   Loaded: loaded (/etc/systemd/system/ultram_health.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
Example Output Active Service:
Redirecting to /bin/systemctl status ultram_health.service
● ultram_health.service - Cisco UltraM Health monitoring Service
   Loaded: loaded (/etc/systemd/system/ultram_health.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2017-09-10 22:20:20 EDT; 5s ago
 Main PID: 16982 (start_ultram_he)
   CGroup: /system.slice/ultram_health.service
           ├─16982 /bin/sh /usr/local/sbin/start_ultram_health
           ├─16983 python /opt/cisco/usp/ultram-manager/ultram_health.py /etc/ultram_cfg.yaml
           ├─16991 python /opt/cisco/usp/ultram-manager/ultram_health.py /etc/ultram_cfg.yaml
           └─17052 /usr/bin/python /bin/ironic node-show 19844e8d-2def-4be4-b2cf-937f34ebd117
Sep 10 22:20:20 ospd-tb1.mitg-bxb300.cisco.com systemd[1]: Started Cisco UltraM Health monitoring Service.
Sep 10 22:20:20 ospd-tb1.mitg-bxb300.cisco.com systemd[1]: Starting Cisco UltraM Health monitoring Service...
Sep 10 22:20:20 ospd-tb1.mitg-bxb300.cisco.com start_ultram_health[16982]: 2017-09-10 22:20:20,411 - UCS Health Check started
2. Check the status of the mongo process.
ps -ef | grep mongo
Example output:
mongodb 3769 1 0 Aug23 ? 00:43:30 /usr/bin/mongod --quiet -f /etc/mongod.conf run
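The service state can also be checked programmatically by parsing the Active: line of the systemctl output shown above. The following Python sketch is illustrative only (it is not part of the Ultra M Manager):

```python
import re

def parse_service_state(status_output):
    """Extract the service state (e.g. 'active', 'inactive') from
    `systemctl status` output by reading the 'Active:' line."""
    match = re.search(r"Active:\s+(\S+)\s+\(([^)]+)\)", status_output)
    if not match:
        return None
    state, substate = match.groups()
    return state, substate

# Abbreviated sample output, as shown in the examples above
sample = (
    "Loaded: loaded (/etc/systemd/system/ultram_health.service; enabled)\n"
    "Active: active (running) since Sun 2017-09-10 22:20:20 EDT; 5s ago\n"
)
print(parse_service_state(sample))  # -> ('active', 'running')
```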
Stop the Ultra M Manager Service
It may be necessary to stop the Ultra M Manager service under certain circumstances.
NOTE: These instructions assume that you are already logged into the OSP-D Server/Ultra M Manager Node as the root user.
To stop the health monitor service, enter the following command from the /opt/cisco/usp/ultram-manager directory:
./service ultram_health.service stop
Start the Ultra M Manager Service
It is necessary to start/restart the Ultra M Manager service in order to execute configuration changes and/or after a reboot of the OSP-D Server/Ultra M Manager Node.
NOTE: These instructions assume that you are already logged into the OSP-D Server/Ultra M Manager Node as the root user.
To start the Ultra M Manager service, enter the following command from the /opt/cisco/usp/ultram-manager directory:
./service ultram_health.service start
Uninstalling the Ultra M Manager
If you have a previous version of the Ultra M Manager installed, you must uninstall it before installing newer releases.
To uninstall the Ultra M Manager:
1. Log on to the OSP-D Server/Ultra M Manager Node.
2. Become the root user.
sudo -i
3. Make a backup copy of the existing configuration file (e.g. /etc/ultram_cfg.yaml).
4. Check the installed version.
yum list installed | grep ultra
Example output:
ultram-manager.x86_64 5.1.3-1 installed
5. Uninstall the previous version.
yum erase ultram-manager
Example output:
Loaded plugins: enabled_repos_upload, package_upload, product-id, search-disabled-repos, subscription-manager, versionlock
Resolving Dependencies
--> Running transaction check
---> Package ultram-manager.x86_64 0:5.1.5-1 will be erased
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
 Package            Arch        Version       Repository       Size
================================================================================
Removing:
 ultram-health      x86_64      5.1.5-1       installed        148 k

Transaction Summary
================================================================================
Remove  1 Package

Installed size: 148 k
Is this ok [y/N]:
Enter y at the prompt to continue.
A message similar to the following is displayed upon completion:
Removed: ultram-health.x86_64 0:5.1.3-1
Complete!
Uploading Enabled Repositories Report
Loaded plugins: product-id, versionlock
6. Proceed to Install the Ultra M Manager RPM.
Encrypting Passwords in the ultram_cfg.yaml File
The ultram_cfg.yaml file requires the specification of passwords for the managed components. These passwords are entered in clear text within the file. To mitigate security risks, the passwords should be encrypted before using the file to deploy Ultra M Manager-based features/functions.
To encrypt the passwords, the Ultra M Manager provides a script called utils.py in the /opt/cisco/usp/ultram-manager/ directory. The script can be run against your ultram_cfg.yaml file by navigating to that directory and executing the following command as the root user:
utils.py --secure-cfg /etc/ultram_cfg.yaml
IMPORTANT: Data is encrypted using AES with a 256-bit key that is stored in MongoDB. As such, a user on the OSP-D server is able to access this key and possibly decrypt the passwords. (This includes the stack user, as it has sudo access.)
Executing this script encrypts the passwords in the file and appends a flag (encrypted: true) to the end of the file (e.g. ultram_cfg.yaml) to indicate that the passwords have been encrypted.
IMPORTANT: Do not rename the file once the passwords have been encrypted.
If need be, you can make edits to parameters other than the passwords within the ultram_cfg.yaml file after encrypting the passwords.
For new installations, run the script to encrypt the passwords before applying the configuration and starting the Ultra M Manager service as described in Syslog Proxy and Event Aggregation.
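Conceptually, the script walks the configuration, replaces each password value with ciphertext, and appends the encrypted: true flag. The following Python sketch illustrates only that flow; the stand-in XOR cipher is a placeholder, since the real utils.py uses AES-256 with a key stored in MongoDB:

```python
import base64
import hashlib

def toy_encrypt(value, key):
    """Placeholder cipher: XOR against a SHA-256-derived keystream.
    The real utils.py uses AES-256; this only illustrates the flow."""
    stream = hashlib.sha256(key).digest()
    data = value.encode()
    out = bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))
    return base64.b64encode(out).decode()

def secure_cfg(lines, key=b"demo-key"):
    """Replace 'password:' values and append the encrypted flag,
    mirroring what `utils.py --secure-cfg` does to ultram_cfg.yaml."""
    secured = []
    for line in lines:
        indent = line[:len(line) - len(line.lstrip())]
        if line.strip().startswith("password:"):
            clear = line.split("password:", 1)[1].strip()
            secured.append(indent + "password: " + toy_encrypt(clear, key))
        else:
            secured.append(line)
    secured.append("encrypted: true")
    return secured

cfg = ["login:", "  user: ubuntu", "  password: secret123"]
for line in secure_cfg(cfg):
    print(line)
```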
To encrypt passwords for existing installations:
1. Stop the Ultra M Manager Service.
2. Optional. If installing an updated version of the Ultra M Manager RPM:
a. Save a copy of your ultram_cfg.yaml file to an alternate location outside of the Ultra M Manager installation.
b. Uninstall the Ultra M Manager using the instructions in Uninstalling the Ultra M Manager.
c. Install the new Ultra M Manager version using the instructions in Install the Ultra M Manager RPM.
d. Copy your backed-up ultram_cfg.yaml file to the /etc directory.
3. Navigate to /opt/cisco/usp/ultram-manager/.
cd /opt/cisco/usp/ultram-manager/
4. Encrypt the clear text passwords in the ultram_cfg.yaml file.
utils.py --secure-cfg /etc/ultram_cfg.yaml
NOTE: The script appends a flag (encrypted: true) to the end of the file (e.g. ultram_cfg.yaml). Refer to Encrypting Passwords in the ultram_cfg.yaml File for more information.
5. Start the Ultra M Manager Service.
Configuring Fault Suppression
The ultram_cfg.yaml file provides parameters that can be used to suppress faults at the following levels:
■ UCS server:
— UCS cluster: All events for all UCS nodes are suppressed.
— UCS fault object distinguished names (DNs): All events for one or more specified UCS object DNs within the cluster are suppressed.
— UCS faults: One or more specified UCS faults are suppressed.
NOTE: Fault suppression can be simultaneously configured at both the UCS object DN and fault levels.
■ UAS components (AutoVNF, UEM, and the VNFM (ESC)):
— UAS component cluster: All events for all UAS components are suppressed.
— UAS component events: One or more specified UAS component events are suppressed.
When faults are suppressed, event monitoring occurs as usual and the log report files show the faults. However, suppressed faults are not reported over SNMP.
NOTE: Fault suppression is supported in Ultra M Manager RPM version 1.0.4 and later.
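The suppression behavior can be pictured as a filter applied before events are forwarded over SNMP. Below is a minimal Python sketch of that decision, assuming a configuration dictionary shaped like the ultram_cfg.yaml excerpts shown in the procedures that follow (the function itself is illustrative, not the Ultra M Manager's code):

```python
def is_suppressed(event, cfg):
    """Return True if an event should be withheld from SNMP reporting.
    `event` carries the cluster it came from, its fault code, and its
    affected DN; `cfg` mirrors the suppress: sections of ultram_cfg.yaml."""
    cluster = cfg.get(event["cluster"], {})
    if not cluster.get("enabled", True):
        return True                      # whole cluster suppressed
    suppress = cluster.get("suppress", {})
    if event.get("affectedDN") in suppress.get("affectedDN", []):
        return True                      # DN-level suppression
    if event.get("fault") in suppress.get("faults", []):
        return True                      # fault-level suppression
    return False

cfg = {
    "ucs-cluster": {
        "enabled": True,
        "suppress": {
            "affectedDN": ["sys/rack-unit-1/psu-2"],
            "faults": ["F1260"],
        },
    },
    "uas-cluster": {"enabled": False},
}

print(is_suppressed({"cluster": "ucs-cluster", "fault": "F1260"}, cfg))    # True
print(is_suppressed({"cluster": "ucs-cluster", "fault": "F0181"}, cfg))    # False
print(is_suppressed({"cluster": "uas-cluster", "fault": "ha-event"}, cfg)) # True
```

Note that monitoring and logging still occur for suppressed events; only the outbound SNMP report is filtered.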
Suppressing UCS Server Faults
Suppressing UCS Cluster Faults
Suppressing faults at the cluster-level stops the aggregation of all events for all UCS nodes in the deployment.
To suppress faults for the UCS cluster:
1. Log on to the OSP-D Server/Ultra M Manager Node.
2. Become the root user.
sudo su
3. Navigate to /etc.
cd /etc
4. Edit the ultram_cfg.yaml file to suppress traps at the cluster level:
<---SNIP--->
ucs-cluster:
  enabled: false
<---SNIP--->
5. Encrypt the clear text passwords in the ultram_cfg.yaml file.
utils.py --secure-cfg /etc/ultram_cfg.yaml
NOTE: The script appends a flag (encrypted: true) to the end of the file (e.g. ultram_cfg.yaml). Refer to Encrypting Passwords in the ultram_cfg.yaml File for more information.
6. Start the Ultra M Manager Service.
NOTE: Subsequent configuration changes require you to restart the health monitor service. Refer to Restarting the Ultra M Manager Service for details.
7. Verify the configuration by checking the ultram_health.log file.
cat /var/log/cisco/ultram_health.log
Suppressing Faults for Specific UCS DN
UCS incorporates the concept of distinguished names (DNs), where each entity is assigned a unique ID or namespace. Suppressing events for a given UCS fault object DN stops the aggregation of all events related to that DN. Suppression can be enabled for one or more DNs.
Refer to the latest Cisco UCS Faults and Error Messages Reference Guide for more information (https://www.cisco.com/c/en/us/support/servers-unified-computing/unified-computing-system/products-system- message-guides-list.html).
NOTE: Fault suppression can be simultaneously configured at both the UCS object DN and fault levels.
To suppress faults for one or more UCS fault object DNs:
1. Log on to the OSP-D Server/Ultra M Manager Node.
2. Become the root user.
sudo su
3. Navigate to /etc.
cd /etc
4. Edit the ultram_cfg.yaml file to suppress UCS fault object DN-level events:
<---SNIP--->
ucs-cluster:
  enabled: true
  suppress:
    affectedDN:
      -
<---SNIP--->
For example, to suppress faults related to all flexflash devices and all faults pertaining to the power supply, configure the ultram_cfg.yaml file as follows:
<---SNIP--->
ucs-cluster:
  enabled: true
  suppress:
    affectedDN:
      - sys/rack-unit-1/psu-2
      - sys/rack-unit-1/board/storage-flexflash-FlexFlash
<---SNIP--->
5. Encrypt the clear text passwords in the ultram_cfg.yaml file.
utils.py --secure-cfg /etc/ultram_cfg.yaml
NOTE: The script appends a flag (encrypted: true) to the end of the file (e.g. ultram_cfg.yaml). Refer to Encrypting Passwords in the ultram_cfg.yaml File for more information.
6. Start the Ultra M Manager Service.
NOTE: Subsequent configuration changes require you to restart the health monitor service. Refer to Restarting the Ultra M Manager Service for details.
7. Verify the configuration by checking the ultram_health.log file.
cat /var/log/cisco/ultram_health.log
Suppressing Specific UCS Server Faults
Suppressing events for a given UCS fault stops the aggregation of that event. Suppression can be enabled for one or more events.
Refer to the latest Cisco UCS Faults and Error Messages Reference Guide for more information (https://www.cisco.com/c/en/us/support/servers-unified-computing/unified-computing-system/products-system- message-guides-list.html).
NOTE: Fault suppression can be simultaneously configured at both the UCS object DN and fault levels.
To suppress one or more specific UCS faults:
1. Log on to the OSP-D Server/Ultra M Manager Node.
2. Become the root user.
sudo su
3. Navigate to /etc.
cd /etc
4. Edit the ultram_cfg.yaml file to suppress specific UCS faults:
<---SNIP--->
ucs-cluster:
  enabled: true
  suppress:
    faults:
      -
      …
      -
<---SNIP--->
For example, to suppress faults F1260 and F1261, configure the ultram_cfg.yaml file as follows:
<---SNIP--->
ucs-cluster:
  enabled: true
  suppress:
    faults:
      - F1260
      - F1261
<---SNIP--->
5. Encrypt the clear text passwords in the ultram_cfg.yaml file.
utils.py --secure-cfg /etc/ultram_cfg.yaml
NOTE: The script appends a flag (encrypted: true) to the end of the file (e.g. ultram_cfg.yaml). Refer to Encrypting Passwords in the ultram_cfg.yaml File for more information.
6. Start the Ultra M Manager Service.
NOTE: Subsequent configuration changes require you to restart the health monitor service. Refer to Restarting the Ultra M Manager Service for details.
7. Verify the configuration by checking the ultram_health.log file.
cat /var/log/cisco/ultram_health.log
Suppressing UAS Component Faults
Suppressing UAS Component Cluster Faults
Suppressing faults at the cluster-level suppresses all events for UAS components in the deployment (AutoVNF, UEM, and the VNFM (ESC)).
To suppress faults for the UAS cluster:
1. Log on to the OSP-D Server/Ultra M Manager Node.
2. Become the root user.
sudo su
3. Navigate to /etc.
cd /etc
4. Edit the ultram_cfg.yaml file to suppress traps at the cluster level:
<---SNIP--->
uas-cluster:
  enabled: false
<---SNIP--->
5. Encrypt the clear text passwords in the ultram_cfg.yaml file.
utils.py --secure-cfg /etc/ultram_cfg.yaml
NOTE: The script appends a flag (encrypted: true) to the end of the file (e.g. ultram_cfg.yaml). Refer to Encrypting Passwords in the ultram_cfg.yaml File for more information.
6. Start the Ultra M Manager Service.
NOTE: Subsequent configuration changes require you to restart the health monitor service. Refer to Restarting the Ultra M Manager Service for details.
7. Verify the configuration by checking the ultram_health.log file.
cat /var/log/cisco/ultram_health.log
Suppressing Specific UAS Component Events
Suppressing events for a given UAS fault stops the aggregation of that event. Suppression can be enabled for one or more events.
The following events are defined for the UAS components (AutoVNF, UEM, and the VNFM (ESC)):
■ api-endpoint: ConfD or NCS is in a bad state.
■ cluster-ha: The UAS HA cluster is in a failed state due to an issue with Zookeeper.
■ ha-event: A UAS switchover event has occurred (one of the UAS VMs in a cluster rebooted and a switchover/failover has been performed resulting in the election of a new master).
■ overall: All of the above UAS events.
To suppress one or more specific UAS faults:
1. Log on to the OSP-D Server/Ultra M Manager Node.
2. Become the root user.
sudo su
3. Navigate to /etc.
cd /etc
4. Edit the ultram_cfg.yaml file to suppress specific UAS component events:
<---SNIP--->
uas-cluster:
  enabled: true
  log-file: '/var/log/cisco/ultram_uas.log'
  data-dir: '/opt/cisco/usp/ultram_health.data/uas'
  autovnf:
    172.21.201.53:
      autovnf:
        login:
          user: ubuntu
          password: *******
        netconf:
          user: admin
          password: *******
        suppress:
          -
<---SNIP--->
For example, to suppress api-endpoint and ha-event events for AutoVNF and UEM, configure the ultram_cfg.yaml file as follows:
uas-cluster:
  enabled: true
  log-file: '/var/log/cisco/ultram_uas.log'
  data-dir: '/opt/cisco/usp/ultram_health.data/uas'
  autovnf:
    172.21.201.53:
      autovnf:
        login:
          user: ubuntu
          password: *******
        netconf:
          user: admin
          password: *******
        suppress:
          - api-endpoint
          - ha-event
      em:
        login:
          user: ubuntu
          password: *******
        netconf:
          user: admin
          password: *******
        suppress:
          - api-endpoint
          - ha-event
<---SNIP--->
5. Encrypt the clear text passwords in the ultram_cfg.yaml file.
utils.py --secure-cfg /etc/ultram_cfg.yaml
NOTE: The script appends a flag (encrypted: true) to the end of the file (e.g. ultram_cfg.yaml). Refer to Encrypting Passwords in the ultram_cfg.yaml File for more information.
6. Start the Ultra M Manager Service.
NOTE: Subsequent configuration changes require you to restart the health monitor service. Refer to Restarting the Ultra M Manager Service for details.
7. Verify the configuration by checking the ultram_health.log file.
cat /var/log/cisco/ultram_health.log
Configuring Disk Monitoring
The Ultra M Manager Node can be configured to monitor the health of the RAID controllers, physical disks, virtual disks, smart disks, and disk battery backup units in the UCS hosts that comprise the Ultra M system.
Once the Ultra M Manager RPM is installed, the Ultra M Manager utility allows you to monitor the host OS disks and the Ceph cluster disks. This utility includes various options to collect disk health information, including detailed error information in the case of hardware failures.
NOTE: This functionality works with the Ultra M Manager version 1.0.7 and later.
The Ultra M Manager generates granular SNMP traps to report failures, thereby allowing failing or failed hardware to be replaced. The Ultra M Manager provides configurable thresholds for tuning the definition of disk errors wherever appropriate.
In addition to the disk monitoring, Ultra M Manager checks the ceph logs to determine any known error indications.
The ultram_cfg.yaml file provides parameters that can be used to configure the disk monitoring. Disk monitoring is enabled by default, and is triggered through the Ultra M Manager service.
The following table identifies the parameters related to disk monitoring.
■ diskmon_cr_memory_correctable_errors_threshold: Threshold value for the maximum allowed memory correctable errors for disk controllers. Default: 5000
■ diskmon_cr_memory_uncorrectable_errors_threshold: Threshold value for the maximum allowed memory uncorrectable errors for disk controllers. Default: 1
■ diskmon_cv_battery_temparature_threshold: Threshold value for the maximum allowed temperature (in degrees Celsius) for the disk cache vault. Default: 55
■ diskmon_pd_media_error_count_threshold: Threshold value for the maximum allowed media error count for a physical drive. Default: 100
■ diskmon_pd_other_error_count_threshold: Threshold value for the maximum allowed other error count for a physical drive. Default: 25
■ diskmon_pd_predictive_failure_count_threshold: Threshold value for the maximum allowed predictive failure count for a physical drive. Default: 2
■ diskmon_fm_rootfs_memory_used_threshold: Threshold value for the maximum root filesystem usage, in percent. Default: 80
■ diskmon_debug_logging_enabled: Flag to enable debug logging for disk monitoring. Default: true
■ diskmon_raise_alarm_enabled: Flag to enable raising of alarms for disk monitoring. Default: true
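The thresholds above amount to simple comparisons applied to counters collected from each controller or drive. A hedged Python sketch of that evaluation follows; the parameter names are taken from the table, but the alarm logic itself is illustrative, not the Ultra M Manager's implementation:

```python
# Default thresholds, taken from the parameter table above
DEFAULTS = {
    "diskmon_cr_memory_correctable_errors_threshold": 5000,
    "diskmon_cr_memory_uncorrectable_errors_threshold": 1,
    "diskmon_pd_media_error_count_threshold": 100,
    "diskmon_pd_other_error_count_threshold": 25,
    "diskmon_pd_predictive_failure_count_threshold": 2,
}

def check_thresholds(counters, thresholds=DEFAULTS):
    """Return the list of threshold parameters exceeded by the observed
    counters; each exceeded entry would translate into an SNMP trap."""
    return [name for name, limit in thresholds.items()
            if counters.get(name, 0) > limit]

observed = {
    "diskmon_pd_media_error_count_threshold": 150,  # exceeds the 100 default
    "diskmon_pd_other_error_count_threshold": 3,    # within the 25 default
}
print(check_thresholds(observed))
```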
Appendix: Network Definitions (Layer 2 and 3)
Table 51 is intended to be used as a template for recording your Ultra M network Layer 2 and Layer 3 deployments.
Some of the Layer 2 and 3 networking parameters identified in Table 51 are configured directly on the UCS hardware via CIMC. Other parameters are configured as part of the Undercloud or Overcloud configuration. This configuration is done through various configuration files depending on the parameter:
■ undercloud.conf
■ network.yaml
■ layout.yaml
Table 51. Layer 2 and 3 Network Definitions

External-Internet (meant for OSP-D only)
— VLAN ID/Range: 100
— Network: 192.168.1.0/24
— Gateway: 192.168.1.1
— Description: Internet access required: 1 IP address for OSP-D, and 1 IP for the default gateway
— Where Configured: On OSP-D Server hardware
— Routable: Yes

External Floating IP Addresses (Virtio)
— VLAN ID/Range: 101
— Network: 192.168.10.0/24
— Gateway: 192.168.10.1
— Description: Routable addresses required: 3 IP addresses for Controllers, 1 VIP for the master Controller Node, 4:10 Floating IP addresses per VNF assigned to management VMs (CF, VNFM, UEM, and UAS software modules), and 1 IP for the default gateway
— Where Configured: network.yaml and/or layout.yaml*
— Routable: Yes

Provisioning
— VLAN ID/Range: 105
— Network: 192.0.0.0/8
— IP Range Start: 192.200.0.100
— IP Range End: 192.200.0.254
— Description: Required to provision all configuration via PXE boot from OSP-D for Ceph, Controller and Compute nodes. Intel On-Board Port 1 (1G).
— Where Configured: undercloud.conf
— Routable: No

IPMI-CIMC
— VLAN ID/Range: 105
— Network: 192.0.0.0/8
— IP Range Start: 192.100.0.100
— IP Range End: 192.100.0.254
— Where Configured: On UCS servers through CIMC
— Routable: No

Tenant (Virtio)
— VLAN ID/Range: 17
— Network: 11.17.0.0/24
— Description: All Virtio-based tenant networks. (MLOM)*
— Where Configured: network.yaml and/or layout.yaml*
— Routable: No

Storage (Virtio)
— VLAN ID/Range: 18
— Network: 11.18.0.0/24
— Description: Required for Controllers, Computes and Ceph for read/write from and to Ceph. (MLOM)
— Where Configured: network.yaml and/or layout.yaml*
— Routable: No

Storage-MGMT (Virtio)
— VLAN ID/Range: 19
— Network: 11.19.0.0/24
— Description: Required for Controllers and Ceph only as the Storage Cluster internal network. (MLOM)
— Where Configured: network.yaml and/or layout.yaml*
— Routable: No

Internal-API (Virtio)
— VLAN ID/Range: 20
— Network: 11.20.0.0/24
— Description: Required for Controllers and Computes for OpenStack manageability. (MLOM)
— Where Configured: network.yaml and/or layout.yaml*
— Routable: No

Mgmt (Virtio)
— VLAN ID/Range: 21
— Network: 172.16.181.0/24
— IP Range Start: 172.16.181.100
— IP Range End: 172.16.181.254
— Description: Tenant-based virtio network on OpenStack.
— Where Configured: network.yaml and/or layout.yaml*
— Routable: No

Other-Virtio
— VLAN ID/Range: 1001:1500
— Description: Tenant-based virtio networks on OpenStack.
— Where Configured: network.yaml and/or layout.yaml*
— Routable: No

SR-IOV (Phys-PCIe1)
— VLAN ID/Range: 2101:2500
— Description: Tenant SR-IOV network on OpenStack. (Intel NIC on PCIe1)
— Where Configured: network.yaml and/or layout.yaml*
— Routable: Yes

SR-IOV (Phys-PCIe4)
— VLAN ID/Range: 2501:2900
— Description: Tenant SR-IOV network on OpenStack. (Intel NIC on PCIe4)
— Where Configured: network.yaml and/or layout.yaml*
— Routable: Yes
NOTE: Green text is provided as example configuration information. Your deployment requirements will vary. The IP addresses in bold blue text are the recommended addresses used for internal routing between VNF components. All other IP addresses and VLAN IDs may be changed/assigned.
* For non-Hyper-Converged Ultra M models based on OpenStack 9, the Layer 2 and 3 networking parameters must be configured in the networks.yaml file. For Hyper-Converged Ultra M models based on OpenStack 10, these parameters must be configured in both the networks.yaml and the layout.yaml files.
Appendix: Non-Hyper-Converged Ultra M VIM Deployment Procedures
The Red Hat Enterprise Linux (RHEL) OpenStack Platform Director (OSP-D) uses two main concepts: an Undercloud and an Overcloud. The Undercloud installs and configures the Overcloud. This document describes the VIM deployment in terms of:
■ Deploying the Undercloud
■ Deploying the Overcloud
Prerequisites
Before beginning the VIM deployment process:
■ Ensure that the UCS C-Series Servers have been properly prepared based on the information provided in Prepare the UCS C-Series Hardware.
■ Ensure that you have the proper versions of Red Hat and OSP-D as identified in Software Specifications.
Deploying the Undercloud
NOTE: Before beginning, please ensure all Prerequisites are met.
Mount the Red Hat ISO Image
1. Log in to the OSP-D Server.
2. Launch KVM Console.
3. Select Virtual Media > Activate Virtual Devices. Accept the session and enable remembering your setting for future connections.
4. Select Virtual Media > Map CD/DVD and map the Red Hat ISO image.
5. Select Power > Reset System (Warm Boot) to reboot the system.
6. Upon restart, press F6 and select Cisco vKVM-Mapped vDVD1.22 and press ENTER.
7. Proceed to Install RHEL.
Install RHEL
NOTE: The procedure in this section represents a simplified version of the installation process that identifies the minimum number of parameters that must be configured.
1. Select the option to install Red Hat Enterprise Linux to begin the installation.
2. Select Software Selection > Minimum Install Only.
3. Configure network interfaces.
a. Select the first interface for the Provisioning Network.
b. Configure the DNS Server.
c. Select Manual IPv4 configuration.
d. Repeat these steps for the external interface.
4. Select Date & Time and specify your Region and City.
5. Enable Network Time and configure NTP Servers.
6. Select Installation Destination and use the ext4 file system.
7. Disable Kdump.
8. Set Root password only.
9. Begin the installation.
10. Proceed to Prepare Red Hat for the Undercloud Installation once the installation is complete.
Prepare Red Hat for the Undercloud Installation
1. Log in to the OSP-D server using the floating IP.
2. The director installation process requires a non-root user to execute commands. Use the following commands to create the user named stack and set a password:
useradd stack
passwd stack
3. Disable password requirements for the stack user when using sudo:
echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
stack ALL=(root) NOPASSWD:ALL
chmod 0440 /etc/sudoers.d/stack
4. Switch to the stack user:
su - stack
5. The director uses system images and Heat templates to create the Overcloud environment. To keep these files organized, create the directories for images and templates:
mkdir ~/images
mkdir ~/custom_templates
6. Use hostnamectl to set a hostname:
sudo hostnamectl set-hostname
7. Add entries for the OSP-D server's hostname to the /etc/hosts file, as shown below:
vi /etc/hosts
8. Complete the Red Hat registration as shown below and enable the repositories for OSP 10.
sudo subscription-manager register
sudo subscription-manager list --available --all
sudo subscription-manager attach --pool=
9. Perform an update on your system to make sure you have the latest base system packages and reboot the system.
sudo yum update -y
sudo reboot
10. Proceed to Install the Undercloud when the reboot is complete.
Install the Undercloud
1. Log in to the OSP-D server and execute the following command to install the required command line tools for director installation and configuration:
sudo yum install -y python-tripleoclient
2. From the set of files provided, copy the file undercloud.conf to /home/stack and make the following changes to suit the environment:
— Set undercloud_hostname
— Ensure local_interface is set to eno1
3. Install additional packages and configure services to suit the settings in undercloud.conf:
openstack undercloud install
NOTE: The installation and configuration also starts all OpenStack Platform services automatically.
4. To initialize the stack user to use the command line tools, run the following command:
source ~/stackrc
5. Install the rhosp-director-images and rhosp-director-images-ipa packages containing the required disk images for provisioning overcloud nodes:
sudo yum install rhosp-director-images rhosp-director-images-ipa
These packages include the following required images:
— An introspection kernel and ramdisk - Used for bare metal system introspection over PXE boot.
— A deployment kernel and ramdisk - Used for system provisioning and deployment.
— An overcloud kernel, ramdisk, and full image - Used as the base Overcloud system that is written to each node's hard disk.
6. Extract the archives to the images directory in the stack user's home directory:
cd ~/images
for i in /usr/share/rhosp-director-images/overcloud-full-latest-9.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-9.0.tar; do tar -xvf $i; done
7. For debug purposes, customize the overcloud image as follows:
sudo yum install libguestfs-tools virt-manager -y
sudo systemctl start libvirtd
virt-customize -a overcloud-full.qcow2 --root-password password:
8. Import these images into the director:
openstack overcloud image upload --image-path /home/stack/images/
9. View a list of images:
openstack image list
10. List the introspection images which have been copied into /httpboot.
ls -l /httpboot
11. Configure the nameserver required by the Overcloud nodes in order for them to be able to resolve hostnames through DNS:
neutron subnet-list
neutron subnet-update
12. Proceed to Deploying the Overcloud to continue.
Deploying the Overcloud
NOTE: Before beginning, please ensure that all Prerequisites are met and that the Undercloud was successfully installed as described in Deploying the Undercloud.
Node List Import
The director requires a node definition template. This file (instackenv.json) uses the JSON format and contains the hardware and power management details for your nodes.
1. Transfer the instackenv.json file to the /home/stack/ directory on the OSP-D server.
2. Edit the following parameters in the instackenv.json file:
— mac: The MAC address of the server's provisioning NIC.
— pm_user: The name of the CIMC administrative user for the server.
— pm_password: The password for the CIMC administrative user.
— pm_addr: The IP address of the server's CIMC.
NOTE: These parameters must be specified for each of the server nodes including controllers, computes, and Ceph nodes.
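Assuming the standard TripleO node definition format, a single node entry in instackenv.json might look like the sketch below. The MAC address, credentials, CIMC IP, and the pm_type driver name are placeholders; verify the exact field names and driver against your Red Hat release:

```python
import json

# Hypothetical node entry; replace the placeholder values with the
# MAC address, CIMC credentials, and CIMC IP for each server.
node = {
    "mac": ["00:11:22:33:44:55"],      # provisioning NIC MAC address
    "pm_type": "pxe_ipmitool",         # power management driver (assumed)
    "pm_user": "admin",                # CIMC administrative user
    "pm_password": "cimc-password",    # CIMC password
    "pm_addr": "192.100.0.10",         # CIMC IP address
}

instackenv = {"nodes": [node]}
print(json.dumps(instackenv, indent=2))
```

An entry of this shape is repeated in the "nodes" list for every Controller, Compute, and Ceph node.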
3. Execute the following command to import the list of nodes into the OSP-D:
openstack baremetal import --json ~/instackenv.json
Check the status by executing the ironic node-list command several times until all nodes show a state of power off.
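Rather than re-running ironic node-list by hand, the wait can be scripted. A hedged Python sketch, assuming a caller-supplied helper that returns each node's power state (for example, a wrapper that parses ironic node-list output):

```python
import time

def all_powered_off(node_states):
    """True when every node reports the 'power off' state."""
    return all(state == "power off" for state in node_states)

def wait_for_power_off(fetch_states, interval=10, retries=30):
    """Poll a state-fetching callable (e.g. a hypothetical wrapper
    around `ironic node-list`) until all nodes are powered off."""
    for _ in range(retries):
        if all_powered_off(fetch_states()):
            return True
        time.sleep(interval)
    return False

# Simulated states, standing in for parsed `ironic node-list` output
samples = iter([["power on", "power off"], ["power off", "power off"]])
print(wait_for_power_off(lambda: next(samples), interval=0))  # -> True
```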
4. Assign the kernel and ramdisk images to all nodes:
openstack baremetal configure boot
5. Check the overcloud profile list for osd-computes, controller and compute:
openstack overcloud profiles list
NOTE: The nodes are now registered and configured in the OSP-D. The list of nodes is viewable by executing the ironic node-list command.
6. Proceed to Node Introspection to continue.
Node Introspection
OSP-D can run an introspection process on each node. This process causes each node to boot an introspection agent over PXE. This agent collects hardware data from the node and sends it back to the director. To enable and verify node introspection:
1. Via Ironic, use IPMI to turn the servers on, collect their properties, and record them in the Ironic database:
openstack baremetal introspection bulk start
2. Verify the nodes completed introspection without errors:
openstack baremetal introspection bulk status
3. Proceed to Install the Overcloud to continue.
Install the Overcloud
1. Copy the custom templates to the /home/stack/custom-templates/ directory on the OSP-D server. Confirm that the templates are present:
cd /home/stack/custom-templates/
ls -ltr
total 56
-rw-rw-r--. 1 stack stack 3327 Mar 21 14:00 network.yaml
-rw-rw-r--. 1 stack stack 7625 Mar 21 14:16 custom-roles.yaml
-rw-rw-r--. 1 stack stack  735 Mar 21 14:20 ceph.yaml
-rw-rw-r--. 1 stack stack  508 Mar 21 14:28 post-deploy-template.yaml
-rw-rw-r--. 1 stack stack 2283 Mar 21 14:29 numa-systemd-osd.sh
-rw-rw-r--. 1 stack stack  853 Mar 21 14:29 wipe-disk.sh
-rw-rw-r--. 1 stack stack  237 Mar 21 15:22 compute.yaml
-rw-rw-r--. 1 stack stack  430 Mar 21 15:23 first-boot-template.yaml
-rw-r--r--. 1 stack stack 2303 Mar 21 15:40 merged-first-boot.yaml
-rw-r--r--. 1 stack stack 2158 Mar 21 15:41 first-boot.yaml
-rw-rw-r--. 1 stack stack 3345 Mar 23 14:46 layout.yaml.3ctrl_3osds_12computes
drwxrwxr-x. 2 stack stack 4096 Mar 23 15:02 nic-configs
-rw-rw-r--. 1 stack stack 3105 Mar 23 17:42 layout.yaml
cd /home/stack/custom-templates/nic-configs
ls -ltr
total 40
-rw-rw-r--. 1 stack stack 6712 Mar 21 15:19 controller-nics.yaml
-rw-r--r--. 1 stack stack 6619 Mar 21 15:35 only-compute-nics.yaml
-rw-r--r--. 1 stack stack 6385 Mar 21 15:37 non-srio-compute-nics.yaml
-rw-rw-r--. 1 stack stack 5388 Mar 23 00:18 compute-nics.yaml.old
-rw-rw-r--. 1 stack stack 7000 Mar 23 15:02 compute-nics.yaml
Some of these files need minor edits to suit your environment as described in the steps that follow.
2. Edit the following parameters in the network.yaml file:
— ExternalNetCidr:
— ExternalAllocationPools: [{'start': '
NOTE: A minimum of 4 IP addresses are required: 1 per Controller Node, and 1 for the VIP address on the master Controller.
— ExternalInterfaceDefaultRoute:
— ExternalNetworkVlanID:
3. Edit the following parameters in the layout.yaml file to configure NTP and the number of Controller, Compute, and OSD Compute nodes:
— NtpServer:
— ControllerCount:
— ComputeCount:
— CephStorageCount:
— OsdComputeCount:
— ControllerIPs:
— ComputeIPs:
— OsdComputeIPs:
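For reference, the count-related fields above for the 3-Controller/3-Ceph/12-Compute layout listed earlier might look like the following sketch. All values are hypothetical and assume the file follows the standard Heat environment parameter_defaults layout; adjust them to your own deployment plan.

```yaml
parameter_defaults:
  NtpServer: 10.81.254.202    # hypothetical NTP server address
  ControllerCount: 3
  ComputeCount: 12
  CephStorageCount: 3
  OsdComputeCount: 0
  # ControllerIPs, ComputeIPs, and OsdComputeIPs map each node to its
  # statically assigned addresses and are omitted from this sketch.
```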
4. Edit the following parameters in the controller-nics.yaml file:
— ExternalNetworkVlanID: > default:
— ExternalInterfaceDefaultRoute: > default: '
5. Edit the following parameters in the only-compute-nics.yaml file:
— ExternalNetworkVlanID: > default:
— ExternalInterfaceDefaultRoute: > default: '
6. Edit the following parameters in the compute-nics.yaml file:
— ExternalNetworkVlanID: > default:
— ExternalInterfaceDefaultRoute: > default: '
7. Verify that the Ironic nodes are powered off and available for provisioning (not in maintenance mode):
openstack baremetal node list
8. Deploy the Overcloud:
deploy.sh
NOTE: To monitor the Overcloud deployment, open a separate terminal window and execute the heat stack-list command.
9. After the Overcloud deployment completes, harden the system by adjusting the Heat policy file:
sudo vi /etc/heat/policy.json
Replace "stacks:delete": "rule:deny_stack_user" with "stacks:delete": "rule:deny_everybody".
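The same hardening can be scripted rather than edited interactively. The sketch below applies the substitution with sed; to stay runnable anywhere it operates on a throwaway copy, whereas on a real controller the target would be /etc/heat/policy.json as described above.

```shell
tmpdir=$(mktemp -d)
# Throwaway stand-in for /etc/heat/policy.json:
cat > "$tmpdir/policy.json" <<'EOF'
{
    "stacks:delete": "rule:deny_stack_user"
}
EOF
# Apply the substitution; -i.bak keeps a backup copy next to the file.
sed -i.bak 's/"stacks:delete": "rule:deny_stack_user"/"stacks:delete": "rule:deny_everybody"/' "$tmpdir/policy.json"
grep '"stacks:delete"' "$tmpdir/policy.json"
```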
10. Proceed to Verify the Overcloud Installation.
Verify the Overcloud Installation
1. Determine the IP address of the controller-0:
nova list |grep controller-0
2. Log in to controller-0 as the heat-admin user:
ssh heat-admin@
3. Verify the health of the controller:
sudo pcs status
NOTE: If any resources show errors, execute the sudo pcs resource cleanup command and re-execute the sudo pcs status command.
4. On the Controller nodes, verify that the Ceph monitor function is active on all three Controllers using the following commands:
$ sudo ceph mon dump
$ sudo ceph -s
5. Determine the IP address of the ceph-storage-0:
nova list | grep cephstorage-
6. Log in to each of the Ceph nodes:
ssh heat-admin@
7. Check the status of each Ceph node:
$ sudo ceph -s
$ sudo ceph osd tree
$ sudo ceph df
8. Navigate back to the OSP-D server:
exit
9. Determine the IP address and admin password of the controller to be able to login to the OpenStack dashboard:
heat stack-list
cat
10. Open your web browser and login to the OpenStack dashboard using the address and admin password obtained in the previous step.
11. Change the default password by selecting admin > Change Password.
12. Update the heat stack file with the new password.
vi
13. Proceed to Configure SR-IOV.
Configure SR-IOV
Enable VF Ports
1. Log in to the Staging Server to access the Compute nodes and switch to the stack user:
su - stack
source stackrc
2. Determine the IP addresses of each compute node:
nova list | grep compute
3. Login to compute node:
ssh heat-admin@
4. Become root user.
sudo su
5. Edit the GRUB configuration to add intel_iommu=on to the kernel command line:
vi /etc/default/grub
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet intel_iommu=on"
6. Regenerate the GRUB configuration to save the changes:
grub2-mkconfig -o /boot/grub2/grub.cfg
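If you prefer to script the kernel-argument change rather than edit the file by hand, a guarded sed such as the following works. It is demonstrated on a temporary copy so the sketch can be exercised safely; on a compute node you would target /etc/default/grub and then run grub2-mkconfig as above.

```shell
tmpdir=$(mktemp -d)
# Throwaway stand-in for /etc/default/grub:
cat > "$tmpdir/grub" <<'EOF'
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet"
EOF
# Append intel_iommu=on inside the quoted value, but only if it is not already there.
grep -q 'intel_iommu=on' "$tmpdir/grub" || \
    sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 intel_iommu=on"/' "$tmpdir/grub"
grep '^GRUB_CMDLINE_LINUX' "$tmpdir/grub"
```

The grep guard makes the edit idempotent, so re-running it after a partial configuration pass does not duplicate the argument.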
7. Restart the compute node.
reboot
8. When the reboot completes, log back in to the compute node:
ssh heat-admin@
9. Verify that the Intel NICs are the correct model (82599ES):
lspci | grep -i 82599ES
10. Configure ixgbe with the required number of VF Ports:
modprobe -r ixgbe
modprobe ixgbe max_vfs=16
11. Validate that the removal of ixgbe went through correctly:
ethtool -i enp10s0f0 | grep ^driver
ethtool -i enp10s0f1 | grep ^driver
ethtool -i enp133s0f0 | grep ^driver
ethtool -i enp133s0f1 | grep ^driver
12. Configure 16 VF ports for ixgbe driver:
modprobe ixgbe max_vfs=16
13. Validate you have 16 VF ports for each physical NIC.
lspci |grep Ethernet
14. Edit the ixgbe.conf file to have correct VF ports:
echo "options ixgbe max_vfs=16" >>/etc/modprobe.d/ixgbe.conf
15. Back up the current initramfs image before rebuilding it with the new ixgbe.conf file:
cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak
16. Rebuild the initramfs via Dracut so that it includes the ixgbe.conf file:
dracut -f -v -M --install /etc/modprobe.d/ixgbe.conf
17. Verify that Dracut update went through cleanly:
lsinitrd /boot/initramfs-3.10.0-327.28.3.el7.x86_64.img | grep -i ixgb
18. Verify that the VF ports are reflected correctly:
cat /sys/class/net/enp10s0f0/device/sriov_numvfs
cat /sys/class/net/enp10s0f1/device/sriov_numvfs
cat /sys/class/net/enp133s0f0/device/sriov_numvfs
cat /sys/class/net/enp133s0f1/device/sriov_numvfs
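The four per-NIC checks above can be collapsed into a single loop. In this sketch the sysfs root is parameterized so the loop can be demonstrated against a mock tree; on a real compute node you would leave it at the default /sys/class/net.

```shell
# check_vfs prints the configured VF count for each SR-IOV NIC in one pass.
check_vfs() {
    local root=${SYSFS_ROOT:-/sys/class/net} nic
    for nic in enp10s0f0 enp10s0f1 enp133s0f0 enp133s0f1; do
        printf '%s: %s\n' "$nic" "$(cat "$root/$nic/device/sriov_numvfs" 2>/dev/null || echo missing)"
    done
}

# Mock sysfs tree standing in for a compute node with 16 VFs per NIC:
SYSFS_ROOT=$(mktemp -d)
for nic in enp10s0f0 enp10s0f1 enp133s0f0 enp133s0f1; do
    mkdir -p "$SYSFS_ROOT/$nic/device"
    echo 16 > "$SYSFS_ROOT/$nic/device/sriov_numvfs"
done
check_vfs
```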
19. Set the MTU for all NICs:
sed -i -re 's/MTU.*//g' /etc/sysconfig/network-scripts/ifcfg-enp10s0f0
echo -e 'MTU="9000"' >> /etc/sysconfig/network-scripts/ifcfg-enp10s0f0
sed -i -re '/^$/d' /etc/sysconfig/network-scripts/ifcfg-enp10s0f0
ip link set enp10s0f0 mtu 9000
sed -i -re 's/MTU.*//g' /etc/sysconfig/network-scripts/ifcfg-enp10s0f1
echo -e 'MTU="9000"' >> /etc/sysconfig/network-scripts/ifcfg-enp10s0f1
sed -i -re '/^$/d' /etc/sysconfig/network-scripts/ifcfg-enp10s0f1
ip link set enp10s0f1 mtu 9000
sed -i -re 's/MTU.*//g' /etc/sysconfig/network-scripts/ifcfg-enp133s0f0
echo -e 'MTU="2100"' >> /etc/sysconfig/network-scripts/ifcfg-enp133s0f0
sed -i -re '/^$/d' /etc/sysconfig/network-scripts/ifcfg-enp133s0f0
ip link set enp133s0f0 mtu 2100
sed -i -re 's/MTU.*//g' /etc/sysconfig/network-scripts/ifcfg-enp133s0f1
echo -e 'MTU="2100"' >> /etc/sysconfig/network-scripts/ifcfg-enp133s0f1
sed -i -re '/^$/d' /etc/sysconfig/network-scripts/ifcfg-enp133s0f1
ip link set enp133s0f1 mtu 2100
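The repeated sed/echo/cleanup sequence above can be wrapped in one helper function, which makes the per-interface MTU assignments easier to audit. The sketch below demonstrates the function on a temporary ifcfg file; on a compute node the files live in /etc/sysconfig/network-scripts/ and you would also run `ip link set <nic> mtu <mtu>` for the running interface.

```shell
# set_persistent_mtu <ifcfg-file> <mtu>
set_persistent_mtu() {
    local ifcfg=$1 mtu=$2
    sed -i -re 's/MTU.*//g' "$ifcfg"      # drop any existing MTU line
    echo "MTU=\"$mtu\"" >> "$ifcfg"       # append the desired value
    sed -i -re '/^$/d' "$ifcfg"           # remove blank lines left behind
}

# Demonstration on a throwaway ifcfg file:
tmpdir=$(mktemp -d)
printf 'DEVICE=enp10s0f0\nMTU="1500"\n' > "$tmpdir/ifcfg-enp10s0f0"
set_persistent_mtu "$tmpdir/ifcfg-enp10s0f0" 9000
cat "$tmpdir/ifcfg-enp10s0f0"
```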
NOTE: To make the MTU configuration persist after a reboot, either disable NetworkManager or add NM_CONTROLLED=no to the interface configuration:
service NetworkManager stop
echo "NM_CONTROLLED=no" >> /etc/sysconfig/network-scripts/ifcfg-
20. Restart the compute node to ensure all the above changes are persistent:
reboot
21. Repeat steps 3 through 20 for all compute nodes.
22. Proceed to SR-IOV Configuration on Controllers.
SR-IOV Configuration on Controllers
1. Log in to the Staging Server to access the Controller nodes and switch to the stack user:
su - stack
source stackrc
2. Determine the IP addresses of each controller node:
nova list | grep controller
3. Login to Controller node:
ssh heat-admin@
4. Become root user:
sudo su
5. Configure jumbo MTU size on the Controller node:
vi /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,9000
vi /etc/neutron/neutron.conf
global_physnet_mtu = 9000
6. Restart Neutron-server:
sudo pcs resource restart neutron-server
7. Enable sriovnicswitch in the /etc/neutron/plugin.ini file:
openstack-config --set /etc/neutron/plugin.ini ml2 tenant_network_types vlan
openstack-config --set /etc/neutron/plugin.ini ml2 type_drivers vlan
openstack-config --set /etc/neutron/plugin.ini ml2 mechanism_drivers openvswitch,sriovnicswitch
openstack-config --set /etc/neutron/plugin.ini ml2_type_vlan network_vlan_ranges datacentre:
NOTE: The VLAN ranges must match those specified in your network plan (refer to Appendix: Network Definitions (Layer 2 and 3)) and/or customer requirements. The mappings are as follows:
— datacentre: Other-Virtio
— phys_pcie1(0,1) SR-IOV (Phys-PCIe1)
— phys_pcie4(0,1) SR-IOV (Phys-PCIe4)
8. Add associated SRIOV physnets and physnet network_vlan_range under /etc/neutron/plugins/ml2/ml2_conf_sriov.ini file:
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_sriov.ini ml2_sriov agent_required True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_sriov.ini ml2_sriov supported_pci_vendor_devs 8086:10ed
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_sriov.ini sriov_nic physical_device_mappings phys_pcie1_0:enp10s0f0,phys_pcie1_1:enp10s0f1,phys_pcie4_0:enp133s0f0,phys_pcie4_1:enp133s0f1
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_sriov.ini sriov_nic exclude_devices ''
9. Configure the neutron-server service to start with the SR-IOV configuration file (--config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini):
vi /usr/lib/systemd/system/neutron-server.service
[Service]
Type=notify
User=neutron
ExecStart=/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini --log-file /var/log/neutron/server.log
10. Restart the neutron-server service to apply the configuration:
sudo pcs resource restart neutron-server
11. Enable PciPassthrough and Availability Zone Filter for nova on Controllers to allow proper scheduling of SRIOV devices:
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_available_filters nova.scheduler.filters.all_filters
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_default_filters RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,CoreFilter,PciPassthroughFilter,NUMATopologyFilter,SameHostFilter,DifferentHostFilter,AggregateInstanceExtraSpecsFilter
12. Restart the nova conductor:
sudo pcs resource restart openstack-nova-conductor
13. Restart the nova scheduler:
sudo pcs resource restart openstack-nova-scheduler
14. Verify that all OpenStack services are healthy.
sudo pcs status
15. Repeat steps 3 through 14 for all controller nodes.
16. Proceed to SR-IOV Configuration on Computes.
SR-IOV Configuration on Computes
1. Log in to the Staging Server to access the Compute nodes and switch to the stack user:
su - stack
source stackrc
2. Determine the IP addresses of each compute node:
nova list | grep compute
3. Login to Compute node:
ssh heat-admin@
4. Become root user:
sudo su
5. Register the host system using Red Hat Subscription Manager, and subscribe to the required channels:
sudo subscription-manager register
6. Find the entitlement pool for Red Hat OpenStack Platform:
sudo subscription-manager list --available --all
7. Use the pool ID located in the previous step to attach the Red Hat OpenStack Platform 9 entitlements:
sudo subscription-manager attach --pool=
8. Disable all default repositories, and then enable the required Red Hat Enterprise Linux repositories:
sudo subscription-manager repos --disable=*
sudo subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-openstack-9-rpms --enable=rhel-7-server-openstack-9-director-rpms --enable=rhel-7-server-rh-common-rpms
9. Install the openstack-neutron-sriov-nic-agent package:
sudo yum install openstack-neutron-sriov-nic-agent
10. Enable NoopFirewallDriver in the /etc/neutron/plugin.ini file:
openstack-config --set /etc/neutron/plugin.ini securitygroup firewall_driver neutron.agent.firewall.NoopFirewallDriver
11. Add mappings to the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini file:
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_sriov.ini sriov_nic physical_device_mappings phys_pcie1_0:enp10s0f0,phys_pcie1_1:enp10s0f1,phys_pcie4_0:enp133s0f0,phys_pcie4_1:enp133s0f1
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_sriov.ini sriov_nic exclude_devices ''
12. Configure the neutron-sriov-nic-agent service to start with the required configuration files:
vi /usr/lib/systemd/system/neutron-sriov-nic-agent.service
[Service]
Type=notify
User=neutron
ExecStart=/usr/bin/neutron-sriov-nic-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-sriov-nic-agent --log-file /var/log/neutron/sriov-nic-agent.log --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini
13. Start the OpenStack Networking SR-IOV agent.
systemctl enable neutron-sriov-nic-agent.service
systemctl start neutron-sriov-nic-agent.service
14. Associate the available VFs with each physical network by defining the entries in the nova.conf file:
openstack-config --set /etc/nova/nova.conf DEFAULT pci_passthrough_whitelist '[{"devname":"enp10s0f0", "physical_network":"phys_pcie1_0"},{"devname":"enp10s0f1", "physical_network":"phys_pcie1_1"},{"devname": "enp133s0f0", "physical_network":"phys_pcie4_0"},{"devname":"enp133s0f1", "physical_network":"phys_pcie4_1"}]'
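Because pci_passthrough_whitelist is a JSON string embedded in an INI value, a typo in it silently breaks SR-IOV scheduling. A quick parser check before writing it into nova.conf is cheap insurance; python3 is used here for portability, whereas on an RHEL 7 compute node the interpreter would typically be python.

```shell
# The whitelist value, held in a variable so it can be validated before use:
whitelist='[{"devname":"enp10s0f0", "physical_network":"phys_pcie1_0"},
 {"devname":"enp10s0f1", "physical_network":"phys_pcie1_1"},
 {"devname":"enp133s0f0", "physical_network":"phys_pcie4_0"},
 {"devname":"enp133s0f1", "physical_network":"phys_pcie4_1"}]'
# json.tool exits non-zero on malformed JSON, so this doubles as an assertion.
echo "$whitelist" | python3 -m json.tool > /dev/null && echo "whitelist JSON OK"
```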
15. Apply the changes by restarting the nova-compute service.
systemctl restart openstack-nova-compute
16. Validate OpenStack status.
sudo openstack-status
17. Repeat steps 3 through 16 for all Compute nodes.
18. Proceed to Creating Flat Networks for Trunking on Service Ports.
Creating Flat Networks for Trunking on Service Ports
1. Log in to the Staging Server to access the Compute nodes and switch to the stack user:
su - stack
source stackrc
2. Determine the IP addresses of each compute node:
nova list | grep compute
3. Login to Compute node:
ssh heat-admin@
4. Become root user:
sudo su
5. Enable support for Flat networks by making the following changes in both the /etc/neutron/plugin.ini and /etc/neutron/plugins/ml2/ml2_conf.ini files:
[ml2]
#
# From neutron.ml2
#
# List of network type driver entrypoints to be loaded from the
# neutron.ml2.type_drivers namespace. (list value)
#type_drivers = local,flat,vlan,gre,vxlan,geneve
type_drivers = vlan,flat

[ml2_type_flat]
#
# From neutron.ml2
#
# List of physical_network names with which flat networks can be created. Use
# default '*' to allow flat networks with arbitrary physical_network names. Use
# an empty list to disable flat networks. (list value)
#flat_networks = *
flat_networks = datacentre,phys_pcie1_0,phys_pcie1_1,phys_pcie4_0,phys_pcie4_1
6. Repeat steps 3 through 5 for each additional Compute node.
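With flat networks enabled on the listed physnets, trunked service networks can then be created from any host with admin credentials. The network and subnet names and the CIDR below are hypothetical examples; the --provider options reference the physnets defined above:

```shell
neutron net-create sriov-flat-pcie1-0 --shared \
    --provider:network_type flat \
    --provider:physical_network phys_pcie1_0
neutron subnet-create sriov-flat-pcie1-0 192.168.10.0/24 \
    --name sriov-flat-pcie1-0-subnet --disable-dhcp
```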
Appendix: Hyper-Converged Ultra M Model VIM Deployment Procedures
Introduction
The Red Hat Enterprise Linux (RHEL) OpenStack Platform Director (OSP-D) uses two main concepts: an Undercloud and an Overcloud. The Undercloud installs and configures the Overcloud. This document describes the VIM deployment in terms of:
■ Deploying the Undercloud
■ Deploying the Overcloud
Prerequisites
Before beginning the VIM deployment process:
■ Ensure that the UCS C-Series Servers have been properly prepared based on the information provided in Prepare the UCS C-Series Hardware.
■ Ensure that you have the proper versions of Red Hat and OSP-D as identified in Software Specifications.
Deploying the Undercloud
The Undercloud is deployed on the OSP-D server.
NOTE: Before beginning, please ensure all Prerequisites are met.
Mount the Red Hat ISO Image
1. Login to OSP-D Server.
2. Launch KVM Console.
3. Select Virtual Media > Activate Virtual Devices. Accept the session and enable remembering your setting for future connections.
4. Select Virtual Media > Map CD/DVD and map the Red Hat ISO image.
5. Select Power > Reset System (Warm Boot) to reboot the system.
6. Upon restart, press F6 and select Cisco vKVM-Mapped vDVD1.22 and press ENTER.
7. Proceed to Install RHEL.
Install RHEL
NOTE: The procedure in this section represents a simplified version of the installation process that identifies the minimum number of parameters that must be configured.
1. Select the option to install Red Hat Enterprise Linux to begin the installation.
2. Select Software Selection > Minimal Install.
3. Configure the network interfaces.
a. Select the first interface for Provisioning Networking.
b. Configure the DNS server.
c. Select Manual IPv4 configuration.
d. Repeat these steps for the external interface.
4. Select Date & Time and specify your Region and City.
5. Enable Network Time and configure NTP Servers.
6. Select Installation Destination and use ext4 file system.
7. Disable Kdump.
8. Set Root password only.
9. Begin the installation.
10. Proceed to Prepare Red Hat for the Undercloud Installation once the installation is complete.
Prepare Red Hat for the Undercloud Installation
1. Login to the OSP-D server using the external IP.
2. The director installation process requires a non-root user to execute commands. Use the following commands to create the user named stack and set a password:
useradd stack
passwd stack
3. Disable password requirements when the stack user uses sudo:
echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
chmod 0440 /etc/sudoers.d/stack
4. Switch to the stack user:
su - stack
5. The director uses system images and Heat templates to create the Overcloud environment. To keep these files organized, create the directories for images and templates:
mkdir ~/images
mkdir ~/custom_templates
6. Use hostnamectl to set a hostname:
sudo hostnamectl set-hostname
7. The director also requires an entry for the OSP-D server's hostname and IP address in the /etc/hosts file, as shown below:
vi /etc/hosts
8. Complete the Red Hat registration as shown below and enable the repositories for OSP 10.
sudo subscription-manager register
sudo subscription-manager list --available --all
sudo subscription-manager attach --pool=
9. Perform an update on your system to make sure you have the latest base system packages and reboot the system.
sudo yum update -y
sudo reboot
10. Proceed to Install the Undercloud when the reboot is complete.
Install the Undercloud
1. Log in to the OSP-D server and execute the following command to install the required command line tools for director installation and configuration:
sudo yum install -y python-tripleoclient
2. From the set of files provided, copy the file undercloud.conf to /home/stack and make the following changes to suit the environment:
— Set undercloud_hostname
— Ensure local_interface is set to eno1
3. Install additional packages and configure services to suit the settings in undercloud.conf:
openstack undercloud install
NOTE: The installation and configuration also starts all OpenStack Platform services automatically.
4. Verify that the services are enabled and have been started using the following command:
sudo systemctl list-units openstack-*
5. Install the rhosp-director-images and rhosp-director-images-ipa packages containing the required disk images for provisioning overcloud nodes:
sudo yum install rhosp-director-images rhosp-director-images-ipa
These packages include the following required images:
— An introspection kernel and ramdisk - Used for bare metal system introspection over PXE boot.
— A deployment kernel and ramdisk - Used for system provisioning and deployment.
— An overcloud kernel, ramdisk, and full image - Used as the base Overcloud system that is written to the node's hard disk.
6. Extract the archives to the images directory in the stack user's home directory (/home/stack/images):
cd ~/images
for i in /usr/share/rhosp-director-images/overcloud-full-latest-10.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-10.0.tar; do tar -xvf $i; done
7. For debug purposes, customize the overcloud image as follows:
sudo yum install libguestfs-tools virt-manager -y
sudo systemctl start libvirtd
virt-customize -a overcloud-full.qcow2 --root-password password:
8. Customize the deployment to disable chronyd:
sudo virt-customize -a overcloud-full.qcow2 --run-command "systemctl disable chronyd"
9. Import these images into the director:
openstack overcloud image upload --image-path /home/stack/images/
10. View a list of images:
openstack image list
11. List the introspection images which have been copied into /httpboot.
ls -l /httpboot
12. Configure the nameserver required by the Overcloud nodes in order for them to be able to resolve hostnames through DNS:
neutron subnet-list
neutron subnet-update
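The arguments to neutron subnet-update are the provisioning subnet's UUID (taken from the neutron subnet-list output) and the nameserver to hand out to nodes. Both values in the sketch below are placeholders:

```shell
neutron subnet-update <subnet-uuid> --dns-nameserver 8.8.8.8
```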
13. Proceed to Deploying the Overcloud to continue.
Deploying the Overcloud
NOTE: Before beginning, ensure that all Prerequisites are met and that the Undercloud was successfully installed as described in Deploying the Undercloud.
Node List Import
The director requires a node definition template. This file (instackenv.json) uses the JSON format and contains the hardware and power management details for your nodes.
1. Transfer the instackenv.json file to the /home/stack/ directory on the OSP-D server.
2. Edit the following parameters in the instackenv.json file as they pertain to each of the server nodes (Controllers, Computes, and OSD Computes):
— mac: The MAC address of the server's provisioning interface.
— pm_user: The name of the CIMC administrative user for the server.
— pm_password: The password for the CIMC administrative user.
— pm_addr: The CIMC IP address for the server.
NOTE: Make sure that the OSD Computes are named osd-compute-0, osd-compute-1, and osd-compute-2 as specified by default in the file. No separate profiles are created; the Overcloud installation is based on these node names.
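For reference, a single node entry in instackenv.json typically looks like the sketch below. Every value shown (MAC address, CIMC address, credentials, sizing) is hypothetical; the field set follows the standard OSP-D node definition template, with pm_type set for IPMI power management via CIMC:

```json
{
  "nodes": [
    {
      "name": "osd-compute-0",
      "arch": "x86_64",
      "cpu": "2",
      "memory": "8192",
      "disk": "40",
      "mac": ["00:2c:31:aa:bb:01"],
      "pm_type": "pxe_ipmitool",
      "pm_user": "admin",
      "pm_password": "<CIMC password>",
      "pm_addr": "192.100.0.11"
    }
  ]
}
```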
3. Execute the following command to import the list of nodes into the OSP-D:
openstack baremetal import --json ~/instackenv.json
Check the status by executing the ironic node-list command several times until all nodes show a state of power off.
4. Assign the kernel and ramdisk images to all nodes:
openstack baremetal configure boot
5. Check the overcloud profile list for osd-computes, controller and compute:
openstack overcloud profiles list
6. Proceed to Node Introspection to continue.
Node Introspection
OSP-D can run an introspection process on each node. This process causes each node to boot an introspection agent over PXE. This agent collects hardware data from the node and sends it back to the director. To enable and verify node introspection:
1. Via Ironic, use IPMI to turn the servers on, collect their properties, and record them in the Ironic database:
openstack baremetal introspection bulk start
2. Verify the nodes completed introspection without errors:
openstack baremetal introspection bulk status
3. Proceed to Install the Overcloud to continue.
Install the Overcloud
1. Log in to the OSP-D server and confirm that the templates are present:
cd /home/stack/custom-templates/
ls -ltr
total 56
-rw-rw-r--. 1 stack stack 3327 Mar 21 14:00 network.yaml
-rw-rw-r--. 1 stack stack 7625 Mar 21 14:16 custom-roles.yaml
-rw-rw-r--. 1 stack stack 735 Mar 21 14:20 ceph.yaml
-rw-rw-r--. 1 stack stack 508 Mar 21 14:28 post-deploy-template.yaml
-rw-rw-r--. 1 stack stack 2283 Mar 21 14:29 numa-systemd-osd.sh
-rw-rw-r--. 1 stack stack 853 Mar 21 14:29 wipe-disk.sh
-rw-rw-r--. 1 stack stack 237 Mar 21 15:22 compute.yaml
-rw-rw-r--. 1 stack stack 430 Mar 21 15:23 first-boot-template.yaml
-rw-r--r--. 1 stack stack 2303 Mar 21 15:40 merged-first-boot.yaml
-rw-r--r--. 1 stack stack 2158 Mar 21 15:41 first-boot.yaml
-rw-rw-r--. 1 stack stack 3345 Mar 23 14:46 layout.yaml.3ctrl_3osds_12computes
drwxrwxr-x. 2 stack stack 4096 Mar 23 15:02 nic-configs
-rw-rw-r--. 1 stack stack 3105 Mar 23 17:42 layout.yaml
cd /home/stack/custom-templates/nic-configs
ls -ltr
total 40
-rw-rw-r--. 1 stack stack 6712 Mar 21 15:19 controller-nics.yaml
-rw-r--r--. 1 stack stack 6619 Mar 21 15:35 only-compute-nics.yaml
-rw-r--r--. 1 stack stack 6385 Mar 21 15:37 non-srio-compute-nics.yaml
-rw-rw-r--. 1 stack stack 5388 Mar 23 00:18 compute-nics.yaml.old
-rw-rw-r--. 1 stack stack 7000 Mar 23 15:02 compute-nics.yaml
Some of these files need minor edits to suit your environment as described in the steps that follow.
2. Edit the following parameters in the network.yaml file:
— ExternalNetCidr:
— ExternalAllocationPools: [{'start': '
— ExternalInterfaceDefaultRoute:
— ExternalNetworkVlanID:
3. Edit the following parameters in the layout.yaml file:
— NtpServer:
— ControllerCount:
— ComputeCount:
— CephStorageCount: 0
— OsdComputeCount:
— ControllerIPs:
— ComputeIPs:
— OsdComputeIPs:
4. Edit the following parameters in the controller-nics.yaml file:
— ExternalNetworkVlanID: > default:
— ExternalInterfaceDefaultRoute: > default: '
5. Edit the following parameters in the only-compute-nics.yaml file:
— ExternalNetworkVlanID: > default:
— ExternalInterfaceDefaultRoute: > default: '
6. Edit the following parameters in the compute-nics.yaml file:
— ExternalNetworkVlanID: > default:
— ExternalInterfaceDefaultRoute: > default: '
7. Verify that the Ironic nodes are powered off and available for provisioning (not in maintenance mode):
openstack baremetal node list
8. Deploy the Overcloud:
deploy-hci-sriov.sh
9. Monitor the deployment progress and look for failures in a separate console window using the following command:
openstack stack resource list neutonoc -n 1
10. Once the deployment completes, verify that it was successful by confirming that all servers appear in the OpenStack stack list:
openstack stack list
Appendix: Example ultram_cfg.yaml File
The ultram_cfg.yaml file is used to configure and enable syslog proxy and event aggregation functionality within the Ultra M Manager tool. Refer to Event and Syslog Management Within the Ultra M Solution for details.
CAUTION: This is only a sample configuration file provided solely for your reference. You must create and modify your own configuration file according to the specific needs of your deployment.
#------
# Configuration data for Ultra-M Health Check
#------
# Health check polling frequency (15 min)
# In order to ensure optimal performance, it is strongly recommended that
# you do not change the default polling-interval of 15 minutes (900 seconds).
polling-interval: 900
# under-cloud info, this is used to authenticate
# OSPD and mostly used to build inventory list (compute, controllers, OSDs)
under-cloud:
  environment:
    OS_AUTH_URL: http://192.200.0.1:5000/v2.0
    OS_USERNAME: admin
    OS_TENANT_NAME: admin
    OS_PASSWORD: *******
  prefix: neutonoc
# over-cloud info, to authenticate OpenStack Keystone endpoint
over-cloud:
  enabled: true
  environment:
    OS_AUTH_URL: http://172.21.201.217:5000/v2.0
    OS_TENANT_NAME: user1
    OS_USERNAME: user1
    OS_PASSWORD: *******
    OS_ENDPOINT_TYPE: publicURL
    OS_IDENTITY_API_VERSION: 2
    OS_REGION_NAME: regionOne
# SSH Key to be used to login without username/password
auth-key: /home/stack/.ssh/id_rsa

# Number of OpenStack controller nodes
controller_count: 3

# Number of osd-compute nodes
osd_compute_count: 3

# Number of OSD disks per osd-compute node
osd_disk_count_per_osd_compute: 4
# Mark "ceph df" down if raw usage exceeds this setting
ceph_df_use_threshold: 80.0
# Max NTP skew limit in milliseconds
ntp_skew_limit: 100

# Max allowed Memory Correctable Errors for Disk Controllers
diskmon_cr_memory_correctable_errors_threshold: 5000

# Max allowed Memory Uncorrectable Errors for Disk Controllers
diskmon_cr_memory_uncorrectable_errors_threshold: 1

# Max Disk Cache Vault Temperature Threshold
diskmon_cv_battery_temparature_threshold: 55

# Max allowed Disk Physical Drive Media Error Count
diskmon_pd_media_error_count_threshold: 100

# Max allowed Disk Physical Drive Other Error Count
diskmon_pd_other_error_count_threshold: 25

# Max allowed Disk Physical Drive Predictive Failure Count
diskmon_pd_predictive_failure_count_threshold: 2

# Raise Alarm if Memory of Root filesystem usage exceeds the threshold
diskmon_fm_rootfs_memory_used_threshold: 80

# Flag to enable/disable debug message logging
diskmon_debug_logging_enabled: true

# Flag to enable/disable raising alarm on error
diskmon_raise_alarm_enabled: true
snmp:
  enabled: true
  identity: 'ULTRAM-SJC-BLDG-4/UTIT-TESTBED/10.23.252.159'
  nms-server:
    172.21.201.53:
      community: public
    10.23.252.159:
      community: ultram
  agent:
    community: public
  snmp-data-file: '/opt/cisco/usp/ultram_health.data/snmp_faults_table'
  log-file: '/var/log/cisco/ultram_snmp.log'
ucs-cluster:
  enabled: true
  user: admin
  password: *******
  data-dir: '/opt/cisco/usp/ultram_health.data/ucs'
  log-file: '/var/log/cisco/ultram_ucs.log'
uas-cluster:
  enabled: true
  log-file: '/var/log/cisco/ultram_uas.log'
  data-dir: '/opt/cisco/usp/ultram_health.data/uas'
  autovnf:
    172.21.201.53:
      autovnf:
        login:
          user: ubuntu
          password: *******
        netconf:
          user: admin
          password: *******
      em:
        login:
          user: ubuntu
          password: *******
        netconf:
          user: admin
          password: *******
      esc:
        login:
          user: admin
          password: *******
    172.21.201.54:
      autovnf:
        login:
          user: ubuntu
          password: *******
        netconf:
          user: admin
          password: *******
      em:
        login:
          user: ubuntu
          password: *******
        netconf:
          user: admin
          password: *******
      esc:
        login:
          user: admin
          password: *******
# rsyslog configuration, here proxy-rsyslog is IP address of Ultra M Manager Node (NOT remote rsyslog):
rsyslog:
  level: 4,3,2,1,0
  proxy-rsyslog: 192.200.0.251
Appendix: Ultra M MIB
NOTE: Not all aspects of this MIB are supported in this release. Refer to Event and Syslog Management Within the Ultra M Solution for information on the capabilities supported in this release.
-- *****************************************************************
-- CISCO-ULTRAM-MIB.my
-- Copyright (c) 2017 by Cisco Systems, Inc.
-- All rights reserved.
--
-- *****************************************************************
CISCO-ULTRAM-MIB DEFINITIONS ::= BEGIN
IMPORTS
    MODULE-IDENTITY, OBJECT-TYPE, NOTIFICATION-TYPE, Unsigned32
        FROM SNMPv2-SMI
    MODULE-COMPLIANCE, NOTIFICATION-GROUP, OBJECT-GROUP
        FROM SNMPv2-CONF
    TEXTUAL-CONVENTION, DateAndTime
        FROM SNMPv2-TC
    ciscoMgmt
        FROM CISCO-SMI;
ciscoUltramMIB MODULE-IDENTITY
    LAST-UPDATED "201707060000Z"
    ORGANIZATION "Cisco Systems, Inc."
    CONTACT-INFO
        "Cisco Systems Customer Service
         Postal: 170 W Tasman Drive
         San Jose, CA 95134
         USA
         Tel: +1 800 553-NETS"
    DESCRIPTION
        "The MIB module for management of the Cisco Ultra Services
        Platform (USP), also called the Ultra-M Network Function
        Virtualization (NFV) platform. The Ultra-M platform is a Cisco
        validated turnkey solution based on the ETSI (European
        Telecommunications Standards Institute) NFV architecture.

        It comprises the following architectural domains:

        1. Management and Orchestration (MANO): these components enable
           infrastructure virtualization and life cycle management of
           Cisco Ultra Virtual Network Functions (VNFs).
        2. NFV Infrastructure (NFVI): the set of physical resources that
           provide the NFV infrastructure, for example servers, switches,
           chassis, and so on.
        3. Virtualized Infrastructure Manager (VIM)
        4. One or more Ultra VNFs.

        The Ultra-M platform provides a single point of management
        (including SNMP, APIs, Web Console and CLI/Telnet Console) for
        the resources across these domains within an NFV PoD (Point of
        Delivery). This is also called the Ultra-M manager throughout
        the context of this MIB."
    REVISION "201707050000Z"
    DESCRIPTION
        "- cultramFaultDomain changed to read-only in compliance.
        - Added a new fault code serviceFailure under 'CultramFaultCode'.
        - Added a new notification cultramFaultClearNotif.
        - Added new notification group ciscoUltramMIBNotifyGroupExt.
        - Added new compliance group ciscoUltramMIBModuleComplianceRev01
          which deprecates ciscoUltramMIBModuleCompliance."
    REVISION "201706260000Z"
    DESCRIPTION
        "Latest version of this MIB module."
    ::= { ciscoMgmt 849 }
CFaultCode ::= TEXTUAL-CONVENTION
    STATUS current
    DESCRIPTION
        "A code identifying a class of fault."
    SYNTAX INTEGER {
        other(1),               -- Other events
        networkConnectivity(2), -- Network Connectivity Failure Events
        resourceUsage(3),       -- Resource Usage Exhausted Event
        resourceThreshold(4),   -- Resource Threshold crossing alarms
        hardwareFailure(5),     -- Hardware Failure Events
        securityViolation(6),   -- Security Alerts
        configuration(7),       -- Config Error Events
        serviceFailure(8)       -- Process/Service failures
    }
CFaultSeverity ::= TEXTUAL-CONVENTION
    STATUS current
    DESCRIPTION "A code used to identify the severity of a fault."
    SYNTAX INTEGER {
        emergency(1),     -- System-level fault impacting multiple VNFs/services
        critical(2),      -- Critical fault specific to a VNF/service
        major(3),         -- Component-level failure within a VNF/service
        alert(4),         -- Warning condition for a service/VNF; may eventually impact service
        informational(5)  -- Informational only; does not impact service
    }
CFaultDomain ::= TEXTUAL-CONVENTION
    STATUS current
    DESCRIPTION "A code used to categorize the Ultra-M fault domain."
    SYNTAX INTEGER {
        hardware(1),        -- Hardware, including servers and L2/L3 elements
        vimOrchestrator(2), -- VIM undercloud
        vim(3),             -- VIM manager such as OpenStack
        uas(4),             -- Ultra Automation Services modules
        vnfm(5),            -- VNF manager
        vnfEM(6),           -- Ultra VNF Element Manager
        vnf(7)              -- Ultra VNF
    }
-- Textual Conventions definition will be defined before this line
ciscoUltramMIBNotifs OBJECT IDENTIFIER ::= { ciscoUltramMIB 0 }
ciscoUltramMIBObjects OBJECT IDENTIFIER ::= { ciscoUltramMIB 1 }
ciscoUltramMIBConform OBJECT IDENTIFIER ::= { ciscoUltramMIB 2 }
-- Conformance Information Definition
ciscoUltramMIBCompliances OBJECT IDENTIFIER ::= { ciscoUltramMIBConform 1 }
ciscoUltramMIBGroups OBJECT IDENTIFIER ::= { ciscoUltramMIBConform 2 }
ciscoUltramMIBModuleCompliance MODULE-COMPLIANCE
    STATUS deprecated
    DESCRIPTION
        "The compliance statement for entities that support
         the Cisco Ultra-M Fault Managed Objects."
    MODULE -- this module
    MANDATORY-GROUPS {
        ciscoUltramMIBMainObjectGroup,
        ciscoUltramMIBNotifyGroup
    }
    ::= { ciscoUltramMIBCompliances 1 }
ciscoUltramMIBModuleComplianceRev01 MODULE-COMPLIANCE
    STATUS current
    DESCRIPTION
        "The compliance statement for entities that support
         the Cisco Ultra-M Fault Managed Objects."
    MODULE -- this module
    MANDATORY-GROUPS {
        ciscoUltramMIBMainObjectGroup,
        ciscoUltramMIBNotifyGroup,
        ciscoUltramMIBNotifyGroupExt
    }
    OBJECT cultramFaultDomain
    MIN-ACCESS read-only
    DESCRIPTION "cultramFaultDomain is read-only."
    ::= { ciscoUltramMIBCompliances 2 }
ciscoUltramMIBMainObjectGroup OBJECT-GROUP
    OBJECTS {
        cultramNFVIdenity, cultramFaultDomain, cultramFaultSource,
        cultramFaultCreationTime, cultramFaultSeverity, cultramFaultCode,
        cultramFaultDescription
    }
    STATUS current
    DESCRIPTION "A collection of objects providing Ultra-M fault information."
    ::= { ciscoUltramMIBGroups 1 }
ciscoUltramMIBNotifyGroup NOTIFICATION-GROUP
    NOTIFICATIONS { cultramFaultActiveNotif }
    STATUS current
    DESCRIPTION "The set of Ultra-M notifications defined by this MIB."
    ::= { ciscoUltramMIBGroups 2 }
ciscoUltramMIBNotifyGroupExt NOTIFICATION-GROUP
    NOTIFICATIONS { cultramFaultClearNotif }
    STATUS current
    DESCRIPTION "The set of Ultra-M notifications defined by this MIB."
    ::= { ciscoUltramMIBGroups 3 }
cultramFaultTable OBJECT-TYPE
    SYNTAX SEQUENCE OF CUltramFaultEntry
    MAX-ACCESS not-accessible
    STATUS current
    DESCRIPTION "A table of Ultra-M faults. This table contains active faults."
    ::= { ciscoUltramMIBObjects 1 }
cultramFaultEntry OBJECT-TYPE
    SYNTAX CUltramFaultEntry
    MAX-ACCESS not-accessible
    STATUS current
    DESCRIPTION "An entry in the Ultra-M fault table."
    INDEX { cultramFaultIndex }
    ::= { cultramFaultTable 1 }
CUltramFaultEntry ::= SEQUENCE {
    cultramFaultIndex        Unsigned32,
    cultramNFVIdenity        OCTET STRING,
    cultramFaultDomain       CFaultDomain,
    cultramFaultSource       OCTET STRING,
    cultramFaultCreationTime DateAndTime,
    cultramFaultSeverity     CFaultSeverity,
    cultramFaultCode         CFaultCode,
    cultramFaultDescription  OCTET STRING
}
cultramFaultIndex OBJECT-TYPE
    SYNTAX Unsigned32
    MAX-ACCESS not-accessible
    STATUS current
    DESCRIPTION
        "This object uniquely identifies a specific instance of an Ultra-M
         fault. For example, if two separate computes have a service-level
         failure, then each compute will have a fault instance with a
         unique index."
    ::= { cultramFaultEntry 1 }
cultramNFVIdenity OBJECT-TYPE
    SYNTAX OCTET STRING (SIZE (1..512))
    MAX-ACCESS read-write
    STATUS current
    DESCRIPTION
        "This object uniquely identifies the Ultra-M PoD on which this fault
         is occurring. For example, this identity can include the host name
         as well as the management IP where the manager node is running,
         'Ultra-M-San-Francisco/172.10.185.100'."
    ::= { cultramFaultEntry 2 }
cultramFaultDomain OBJECT-TYPE
    SYNTAX CFaultDomain
    MAX-ACCESS read-write
    STATUS current
    DESCRIPTION "The fault domain in which the fault occurred."
    ::= { cultramFaultEntry 3 }
cultramFaultSource OBJECT-TYPE
    SYNTAX OCTET STRING (SIZE (1..512))
    MAX-ACCESS read-only
    STATUS current
    DESCRIPTION
        "This object uniquely identifies the resource within the fault domain
         where this fault is occurring. For example, this can include the
         host name as well as the management IP of the resource,
         'UCS-C240-Server-1/192.100.0.1'."
    ::= { cultramFaultEntry 4 }
cultramFaultCreationTime OBJECT-TYPE
    SYNTAX DateAndTime
    MAX-ACCESS read-only
    STATUS current
    DESCRIPTION "The date and time when the fault occurred."
    ::= { cultramFaultEntry 5 }
cultramFaultSeverity OBJECT-TYPE
    SYNTAX CFaultSeverity
    MAX-ACCESS read-only
    STATUS current
    DESCRIPTION "A code identifying the perceived severity of the fault."
    ::= { cultramFaultEntry 6 }
cultramFaultCode OBJECT-TYPE
    SYNTAX CFaultCode
    MAX-ACCESS read-only
    STATUS current
    DESCRIPTION "A code uniquely identifying the fault class."
    ::= { cultramFaultEntry 7 }
cultramFaultDescription OBJECT-TYPE
    SYNTAX OCTET STRING (SIZE (1..2048))
    MAX-ACCESS read-only
    STATUS current
    DESCRIPTION "A human-readable message providing details about the fault."
    ::= { cultramFaultEntry 8 }
cultramFaultActiveNotif NOTIFICATION-TYPE
    OBJECTS {
        cultramNFVIdenity, cultramFaultDomain, cultramFaultSource,
        cultramFaultCreationTime, cultramFaultSeverity, cultramFaultCode,
        cultramFaultDescription
    }
    STATUS current
    DESCRIPTION
        "This notification is generated by an Ultra-M manager whenever a
         fault is active."
    ::= { ciscoUltramMIBNotifs 1 }
cultramFaultClearNotif NOTIFICATION-TYPE
    OBJECTS {
        cultramNFVIdenity, cultramFaultDomain, cultramFaultSource,
        cultramFaultCreationTime, cultramFaultSeverity, cultramFaultCode,
        cultramFaultDescription
    }
    STATUS current
    DESCRIPTION
        "This notification is generated by an Ultra-M manager whenever a
         fault is cleared."
    ::= { ciscoUltramMIBNotifs 2 }
END
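As a quick check that the MIB subtree is reachable, the active-fault table can be walked from any SNMP manager. This is a sketch only: the manager address and community string below are placeholders, and the snmpwalk call (from the net-snmp tools) is shown commented out so nothing is queried unintentionally.

```shell
# Placeholder target and SNMPv2c community -- substitute real values.
MANAGER=ultram-mgr.example.com
COMMUNITY=public

# ciscoUltramMIB is registered at { ciscoMgmt 849 }, and ciscoMgmt is
# 1.3.6.1.4.1.9.9, so the module's subtree is:
ULTRAM_OID=1.3.6.1.4.1.9.9.849

# With net-snmp installed, walk the active-fault table:
#   snmpwalk -v2c -c "$COMMUNITY" "$MANAGER" "$ULTRAM_OID"
echo "walk $ULTRAM_OID on $MANAGER"
```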
Appendix: Ultra M Component Event Severity and Fault Code Mappings
Events are assigned to one of the following severities (refer to CFaultSeverity in Appendix: Ultra M MIB):
■ emergency(1), -- System level FAULT impacting multiple VNFs/Services
■ critical(2), -- Critical Fault specific to VNF/Service
■ major(3), -- component level failure within VNF/service.
■ alert(4), -- warning condition for a service/VNF, may eventually impact service.
■ informational(5) -- informational only, does not impact service
Events are also mapped to one of the following fault codes (refer to CFaultCode in the Ultra M MIB):
■ other(1) -- Other events
■ networkConnectivity(2) -- Network connectivity failure events
■ resourceUsage(3) -- Resource usage exhausted event
■ resourceThreshold(4) -- Resource threshold crossing alarms
■ hardwareFailure(5) -- Hardware failure events
■ securityViolation(6) -- Security alerts
■ configuration(7) -- Config error events
■ serviceFailure(8) -- Process/service failures
The Ultra M Manager Node serves as an aggregator for events received from the different Ultra M components. These severities and fault codes are mapped to those defined for the specific components. The information in this section provides severity mapping information for the following:
■ OpenStack Events
■ UCS Server Events
■ UAS Events
OpenStack Events
Component: Ceph
Failure Type | Ultra M Severity | Fault Code
CEPH status is not healthy | Emergency | serviceFailure
One or more CEPH monitors are down | Emergency | serviceFailure
Disk usage exceeds threshold | Critical | resourceThreshold
One or more OSD nodes are down | Critical | serviceFailure
One or more OSD disks have failed | Critical | resourceThreshold
One of the CEPH monitors is not healthy | Major | serviceFailure
One or more CEPH monitors restarted | Major | serviceFailure
OSD disk weights not even across the board | | resourceThreshold
CEPH PG is not healthy | Critical | serviceFailure
Component: Cinder
Failure Type | Ultra M Severity | Fault Code
Cinder service is down | Emergency | serviceFailure
Component: Neutron
Failure Type | Ultra M Severity | Fault Code
One of the Neutron agents is down | Critical | serviceFailure
Component: Nova
Failure Type | Ultra M Severity | Fault Code
Compute service down | Critical | serviceFailure
Component: NTP
Failure Type | Ultra M Severity | Fault Code
NTP skew exceeds the configured threshold | Critical | serviceFailure
Component: PCS
Failure Type | Ultra M Severity | Fault Code
One or more controller nodes are down | Critical | serviceFailure
HAProxy is down on one of the nodes | Major | serviceFailure
Galera service is down on one of the nodes | Critical | serviceFailure
RabbitMQ is down | Critical | serviceFailure
Redis master is down | Emergency | serviceFailure
One or more Redis slaves are down | Critical | serviceFailure
corosync/pacemaker/pcsd: not all daemons active | Critical | serviceFailure
Cluster status changed | Major | serviceFailure
Current DC not found | Emergency | serviceFailure
Not all pcsd daemons are online | Critical | serviceFailure
Component: Services
Failure Type | Ultra M Severity | Fault Code
Service is disabled | Critical | serviceFailure
Service is down | Emergency | serviceFailure
Service restarted | Major | serviceFailure
The following OpenStack services are monitored:
■ Controller Nodes
— httpd.service
— memcached
— mongod.service
— neutron-dhcp-agent.service
— neutron-l3-agent.service
— neutron-metadata-agent.service
— neutron-openvswitch-agent.service
— neutron-server.service
— ntpd.service
— openstack-aodh-evaluator.service
— openstack-aodh-listener.service
— openstack-aodh-notifier.service
— openstack-ceilometer-central.service
— openstack-ceilometer-collector.service
— openstack-ceilometer-notification.service
— openstack-cinder-api.service
— openstack-cinder-scheduler.service
— openstack-glance-api.service
— openstack-glance-registry.service
— openstack-gnocchi-metricd.service
— openstack-gnocchi-statsd.service
— openstack-heat-api-cfn.service
— openstack-heat-api-cloudwatch.service
— openstack-heat-api.service
— openstack-heat-engine.service
— openstack-nova-api.service
— openstack-nova-conductor.service
— openstack-nova-consoleauth.service
— openstack-nova-novncproxy.service
— openstack-nova-scheduler.service
— openstack-swift-account-auditor.service
— openstack-swift-account-reaper.service
— openstack-swift-account-replicator.service
— openstack-swift-account.service
— openstack-swift-container-auditor.service
— openstack-swift-container-replicator.service
— openstack-swift-container-updater.service
— openstack-swift-container.service
— openstack-swift-object-auditor.service
— openstack-swift-object-replicator.service
— openstack-swift-object-updater.service
— openstack-swift-object.service
— openstack-swift-proxy.service
■ Compute Nodes:
— ceph-mon.target
— ceph-radosgw.target
— ceph.target
— libvirtd.service
— neutron-sriov-nic-agent.service
— neutron-openvswitch-agent.service
— ntpd.service
— openstack-nova-compute.service
— openvswitch.service
■ OSD Compute Nodes
— ceph-mon.target
— ceph-radosgw.target
— ceph.target
— libvirtd.service
— neutron-sriov-nic-agent.service
— neutron-openvswitch-agent.service
— ntpd.service
— openstack-nova-compute.service
— openvswitch.service
Component: Rabbitmqctl
Failure Type | Ultra M Severity | Fault Code
Cluster status is not healthy | Emergency | serviceFailure
Component: Diskmon
Failure Type | Ultra M Severity | Fault Code
Battery Backup Unit is Not Healthy | Emergency | serviceFailure
System overview Disk Controller Status is Not Healthy | Emergency | serviceFailure
Controller Status is Not Healthy | Emergency | serviceFailure
Memory Correctable Error Count exceeds configured threshold | Major | resourceThreshold
Memory Uncorrectable Error Count exceeds configured threshold | Major | resourceThreshold
Any Offline Vd Cache Preserved is Not Healthy | Critical | serviceFailure
On Board Memory Size is Not Healthy | Critical | serviceFailure
Current Size Of Fw Cache is Not Healthy | Critical | serviceFailure
Battery Status is Not Healthy | Emergency | serviceFailure
Battery Temperature exceeds configured threshold | Major | resourceThreshold
Firmware Replacement Required is Not Healthy | Critical | serviceFailure
No Space To Cache Offload is Not Healthy | Critical | serviceFailure
Virtual Drive Status is Not Healthy | Emergency | serviceFailure
Virtual Drive Cache is Not Healthy | Critical | serviceFailure
Physical Drive Status is Not Healthy | Emergency | serviceFailure
Media Error Count exceeds configured threshold | Major | resourceThreshold
Other Error Count exceeds configured threshold | Major | resourceThreshold
Predictive Failure Count exceeds configured threshold | Major | resourceThreshold
S.M.A.R.T. Alert Flagged By Drive is Not Healthy | Critical | serviceFailure
S.M.A.R.T. Drive Status is Not Healthy | Emergency | serviceFailure
Root filesystem memory usage exceeds configured threshold | Major | resourceThreshold
Suicide timeout is detected for OSD | Emergency | serviceFailure
Slow Request is detected for OSD | Critical | serviceFailure
UCS Server Events
UCS Server events are described here: https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/ts/faults/reference/ErrMess/FaultsIntroduction.html
The following table maps the UCS severities to those within the Ultra M MIB.
UCS Server Severity | Ultra M Severity | Fault Code
Critical | Critical | hardwareFailure
Info | Informational | hardwareFailure
Major | Major | hardwareFailure
Warning | Alert | hardwareFailure
Alert | Alert | hardwareFailure
Cleared | Informational | Not applicable
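The mapping in the table above is simple enough to express as a lookup. The shell function below is purely illustrative (it is not part of any shipped Ultra M tooling):

```shell
# Map a UCS server severity to the corresponding Ultra M severity,
# following the mapping table above. Unrecognized values map to "Unknown".
map_ucs_severity() {
  case "$1" in
    Critical) echo "Critical" ;;
    Info)     echo "Informational" ;;
    Major)    echo "Major" ;;
    Warning)  echo "Alert" ;;
    Alert)    echo "Alert" ;;
    Cleared)  echo "Informational" ;;   # fault code: not applicable
    *)        echo "Unknown" ;;
  esac
}

map_ucs_severity Warning    # prints: Alert
map_ucs_severity Cleared    # prints: Informational
```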
UAS Events
Failure Type | Ultra M Severity | Fault Code
UAS Service Failure | Critical | serviceFailure*
UAS Service Recovered | Informational | serviceFailure*
* serviceFailure except where the Ultra M Health Monitor is unable to connect to any of the modules. In this case, the fault code is set to networkConnectivity.
Appendix: Ultra M Troubleshooting
Ultra M Component Reference Documentation
The following sections provide links to troubleshooting information for the various components that comprise the Ultra M solution.
UCS C-Series Server
■ Obtaining Showtech Support to TAC
■ Display of system Event log events
■ Display of CIMC Log
■ Run Debug Firmware Utility
■ Run Diagnostics CLI
■ Common Troubleshooting Scenarios
■ Troubleshooting Disk and Raid issues
■ DIMM Memory Issues
■ Troubleshooting Server and Memory Issues
■ Troubleshooting Communication Issues
Nexus 9000 Series Switch
■ Troubleshooting Installations, Upgrades, and Reboots
■ Troubleshooting Licensing Issues
■ Troubleshooting Ports
■ Troubleshooting vPCs
■ Troubleshooting VLANs
■ Troubleshooting STP
■ Troubleshooting Routing
■ Troubleshooting Memory
■ Troubleshooting Packet Flow Issues
■ Troubleshooting PowerOn Auto Provisioning
■ Troubleshooting the Python API
■ Troubleshooting NX-API
■ Troubleshooting Service Failures
■ Before Contacting Technical Support
■ Troubleshooting Tools and Methodology
Catalyst 2960 Switch
■ Diagnosing Problems
■ Switch POST Results
■ Switch LEDs
■ Switch Connections
■ Bad or Damaged Cable
■ Ethernet and Fiber-Optic Cables
■ Link Status
■ 10/100/1000 Port Connections
■ 10/100/1000 PoE+ Port Connections
■ SFP and SFP+ Module
■ Interface Settings
■ Ping End Device
■ Spanning Tree Loops
■ Switch Performance
■ Speed, Duplex, and Autonegotiation
■ Autonegotiation and Network Interface Cards
■ Cabling Distance
■ Clearing the Switch IP Address and Configuration
■ Finding the Serial Number
■ Replacing a Failed Stack Member
Red Hat
■ Troubleshooting Director issues: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/director_installation_and_usage/chap-troubleshooting_director_issues
■ Back up and restore the Director Undercloud: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html-single/back_up_and_restore_the_director_undercloud/
OpenStack
■ Red Hat OpenStack troubleshooting commands and scenarios
UAS
Refer to the USP Deployment Automation Guide
UGP
Refer to the Ultra Gateway Platform System Administration Guide
Collecting Support Information
From UCS:
■ Collect support information:
chassis show tech support
show tech support (if applicable)
■ Check which UCS MIBs are being polled (if applicable). Refer to https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/mib/c-series/b_UCS_Standalone_C-Series_MIBRef/b_UCS_Standalone_C-Series_MIBRef_chapter_0100.html
From Host/Server/Compute/Controller/Linux:
■ Identify whether Passthrough/SR-IOV is enabled.
■ Run sosreport:
NOTE: This functionality is enabled by default on Red Hat, but not on Ubuntu. It is recommended that you enable sysstat and sosreport on Ubuntu (run apt-get install sysstat and apt-get install sosreport). It is also recommended that you install sysstat on Red Hat (run yum install sysstat).
■ Get and run the os_ssd_pac script from Cisco:
— Compute (all):
./os_ssd_pac.sh -a
./os_ssd_pac.sh -k -s
NOTE: For initial collection, it is always recommended to include the -s option (sosreport). Run ./os_ssd_pac.sh -h for more information.
— Controller (all):
./os_ssd_pac.sh -f
./os_ssd_pac.sh -c -s
NOTE: For initial collection, it is always recommended to include the -s option (sosreport). Run ./os_ssd_pac.sh -h for more information.
■ For monitoring purposes, run the script from crontab with the -m option (for example, every 5 or 10 minutes).
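The sosreport collection mentioned above can be sketched as follows. The commands themselves are commented out because they require root and a distribution-specific package manager; sosreport's --batch flag runs the collection without interactive prompts.

```shell
# Install the tools first (distribution dependent):
#   RHEL:    yum install -y sysstat
#   Ubuntu:  apt-get install -y sysstat sosreport
#
# Then collect a report non-interactively:
#   sosreport --batch
SOS_CMD="sosreport --batch"
echo "as root, run: $SOS_CMD"
```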
From Switches:
■ From all switches connected to the Host/Servers. (This also includes other switches that have the same VLANs terminated on the Host/Servers.)
show tech-support
syslogs
snmp traps
NOTE: It is recommended that mac-move notifications be enabled on all switches in the network by running mac address-table notification mac-move.
From ESC (Active and Standby)
NOTE: It is recommended that you take a backup of the software and data before performing any of the following operations. Backups can be taken by executing /opt/cisco/esc/esc-scripts/esc_dbtool.py backup. (Refer to https://www.cisco.com/c/en/us/td/docs/net_mgmt/elastic_services_controller/2-3/user/guide/Cisco-Elastic-Services-Controller-User-Guide-2-3/Cisco-Elastic-Services-Controller-User-Guide-2-2_chapter_010010.html#id_18936 for more information.)
/opt/cisco/esc/esc-scripts/health.sh
/usr/bin/collect_esc_log.sh
./os_ssd_pac -a
From UAS:
■ Monitor ConfD:
confd --status
confd --debug-dump /tmp/confd_debug-dump
confd --printlog /tmp/confd_debug-dump
NOTE: Once the file /tmp/confd_debug-dump is collected, it can be removed (rm /tmp/confd_debug-dump).
■ Monitor UAS Components:
source /opt/cisco/usp/uas/confd-6.1/confdrc
confd_cli -u admin -C
show uas
show uas ha-vip
show uas state
show confd-state
show running-config
show transactions date-and-time
show logs | display xml
show errors displaylevel 64
show notification stream uas_notify last 1000
show autovnf-oper:vnfm
show autovnf-oper:vnf-em
show autovnf-oper:vdu-catalog
show autovnf-oper:transactions
show autovnf-oper:network-catalog
show autovnf-oper:errors
show usp
show confd-state internal callpoints
show confd-state webui listen
show netconf-state
■ Monitor Zookeeper:
/opt/cisco/usp/packages/zookeeper/current/bin/zkCli.sh ls /config/control-function
/opt/cisco/usp/packages/zookeeper/current/bin/zkCli.sh ls /config/element-manager
/opt/cisco/usp/packages/zookeeper/current/bin/zkCli.sh ls /config/session-function
/opt/cisco/usp/packages/zookeeper/current/bin/zkCli.sh ls /
/opt/cisco/usp/packages/zookeeper/current/bin/zkCli.sh ls /stat
/opt/cisco/usp/packages/zookeeper/current/bin/zkCli.sh ls /log
■ Collect Zookeeper data:
cd /tmp
tar zcfv zookeeper_data.tgz /var/lib/zookeeper/data/version-2/
ls -las /tmp/zookeeper_data.tgz
■ Get support details
./os_ssd_pac -a
From UEM (Active and Standby):
■ Collect logs
/opt/cisco/em-scripts/collect-em-logs.sh
■ Monitor NCS:
ncs --status
ncs --debug-dump /tmp/ncs_debug-dump
ncs --printlog /tmp/ncs_debug-dump
NOTE: Once the file /tmp/ncs_debug-dump is collected, it can be removed (rm /tmp/ncs_debug-dump).
■ Collect support details:
./os_ssd_pac -a
From UGP (Through StarOS)
■ Collect multiple outputs of show support details.
NOTE: It is recommended to collect at least two samples, 60 minutes apart if possible.
■ Collect raw bulkstats before and after events.
■ Collect syslogs and snmp traps before and after events.
■ Collect PCAP or sniffer traces of all relevant interfaces if possible.
NOTE: Familiarize yourself with how to run SPAN/RSPAN on Nexus and Catalyst switches. This is important for resolving Passthrough/SR-IOV issues.
■ Collect console outputs from all nodes.
■ Export CDRs and EDRs.
■ Collect the outputs of monitor subscriber next-call or monitor protocol depending on the activity.
■ Refer to https://supportforums.cisco.com/sites/default/files/cisco_asr5000_asr5500_troubleshooting_guide.pdf for more information.
About Ultra M Manager Log Files
All Ultra M health monitor log files are created under /var/log/cisco/ultram-manager:
cd /var/log/cisco/ultram-manager
ls -alrt
Example output:
total 116
drwxr-xr-x. 3 root root  4096 Sep 10 17:41 ..
-rw-r--r--. 1 root root     0 Sep 12 15:15 ultram_health_snmp.log
-rw-r--r--. 1 root root   448 Sep 12 15:16 ultram_health_uas.report
-rw-r--r--. 1 root root   188 Sep 12 15:16 ultram_health_uas.error
-rw-r--r--. 1 root root   580 Sep 12 15:16 ultram_health_uas.log
-rw-r--r--. 1 root root 24093 Sep 12 15:16 ultram_health_ucs.log
-rw-r--r--. 1 root root  8302 Sep 12 15:16 ultram_health_os.error
drwxr-xr-x. 2 root root  4096 Sep 12 15:16 .
-rw-r--r--. 1 root root 51077 Sep 12 15:16 ultram_health_os.report
-rw-r--r--. 1 root root  6677 Sep 12 15:16 ultram_health_os.log
NOTES:
■ The files are named according to the following conventions:
— ultram_health_os: contains information related to OpenStack
— ultram_health_ucs: contains information related to UCS
— ultram_health_uas: contains information related to UAS
■ The *.log files grow over time and contain useful data for debugging in case of issues.
■ The *.report files get created on every run.
■ The *.error files record the events that cause the Ultra M health monitor to send traps out. These files are updated every time a component generates an event.
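A quick way to review the *.error files is to walk the log directory. The sketch below uses a scratch directory with a made-up entry so it is self-contained; on a real deployment, point LOGDIR at /var/log/cisco/ultram-manager.

```shell
# Scratch directory stands in for /var/log/cisco/ultram-manager.
LOGDIR=$(mktemp -d)
printf 'compute-3: nova-compute service down\n' > "$LOGDIR/ultram_health_os.error"

# Show the tail of every *.error file (the trap-triggering events).
for f in "$LOGDIR"/ultram_health_*.error; do
  echo "== $(basename "$f") =="
  tail -n 5 "$f"
done
```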
Appendix: Using the UCS Utilities Within the Ultra M Manager
Cisco UCS server BIOS, MLOM, and CIMC software updates may be made available from time to time.
Utilities have been added to the Ultra M Manager software to simplify the process of upgrading the UCS server software (firmware) within the Ultra M solution.
These utilities are available through a script called ultram_ucs_utils.py located in the /opt/cisco/usp/ultram-manager directory. Refer to Appendix: ultram_ucs_utils.py Help for more information on this script.
NOTES:
■ This functionality is currently supported only with Ultra M deployments based on OSP 10 and that leverage the Hyper-Converged architecture.
■ You should only upgrade your UCS server software to versions that have been validated for use within the Ultra M solution.
■ All UCS servers within the Ultra M solution stack should be upgraded to the same firmware versions.
■ Though it is highly recommended that all server upgrades be performed during a single maintenance window, it is possible to perform the upgrade across multiple maintenance windows based on Node type (e.g. Compute, OSD Compute, and Controller).
There are two upgrade scenarios:
■ Upgrading servers in an existing deployment. In this scenario, the servers are already in use hosting the Ultra M solution stack. This upgrade procedure is designed to maintain the integrity of the stack.
— Compute Nodes are upgraded in parallel.
— OSD Compute Nodes are upgraded sequentially.
— Controller Nodes are upgraded sequentially.
■ Upgrading bare metal servers. In this scenario, the bare metal servers have not yet been deployed within the Ultra M solution stack. This upgrade procedure leverages the parallel upgrade capability within Ultra M Manager UCS utilities to upgrade the servers in parallel.
To use Ultra M Manager UCS utilities to upgrade software for UCS servers in an existing deployment:
1. Perform Pre-Upgrade Preparation.
2. Shutdown the ESC VMs.
3. Upgrade the Compute Node Server Software.
4. Upgrade the OSD Compute Node Server Software.
5. Restarting the UAS and ESC (VNFM) VMs.
6. Upgrade the Controller Node.
7. Upgrading Firmware on the OSP-D Server/Ultra M Manager Node.
To use Ultra M Manager UCS utilities to upgrade software for bare metal UCS servers:
1. Perform Pre-Upgrade Preparation.
2. Upgrading Firmware on UCS Bare Metal.
3. Upgrading Firmware on the OSP-D Server/Ultra M Manager Node.
Perform Pre-Upgrade Preparation
Prior to performing the actual UCS server software upgrade, you must perform the steps in this section to prepare your environment for the upgrade.
NOTES:
■ These instructions assume that all hardware is fully installed, cabled, and operational.
■ These instructions assume that the VIM Orchestrator and VIM have been successfully deployed.
■ UCS server software is distributed separately from the USP software ISO.
To prepare your environment prior to upgrading the UCS server software:
1. Log on to the Ultra M Manager Node.
2. Create a directory called /var/www/html/firmwares to contain the upgrade files.
mkdir -p /var/www/html/firmwares
3. Download the UCS software ISO to the directory you just created.
UCS software is available for download from https://software.cisco.com/download/type.html?mdfid=286281356&flowid=71443
4. Extract the bios.cap file.
mkdir /tmp/UCSISO
sudo mount -t iso9660 -o loop ucs-c240m4-huu-<version>.iso /tmp/UCSISO
mount: /dev/loop2 is write-protected, mounting read-only
cd /tmp/UCSISO/
ls
EFI                    GETFW            isolinux  Release-Notes-DN2.txt  squashfs_img.md5   tools.squashfs.enc
firmware.squashfs.enc  huu-release.xml  LiveOS    squashfs_img.enc.md5   TOC_DELNORTE2.xml  VIC_FIRMWARE
cd GETFW/
ls
getfw  readme.txt
mkdir -p /tmp/HUU
sudo ./getfw -s /tmp/ucs-c240m4-huu-<version>.iso -d /tmp/HUU
Nothing was selected hence getting only CIMC and BIOS FW/s available at '/tmp/HUU/ucs-c240m4-huu-<version>'
cd /tmp/HUU/ucs-c240m4-huu-<version>
ls
bios.cap
5. Copy the bios.cap and huu.iso to the /var/www/html/firmwares/ directory.
sudo cp bios.cap /var/www/html/firmwares/
ls -lrt /var/www/html/firmwares/
total 692228
-rw-r--r--. 1 root root 692060160 Sep 28 22:43 ucs-c240m4-huu-<version>.iso
6. Optional. If it is not already installed, install the Ultra M Manager using the information and instructions in Install the Ultra M Manager RPM.
7. Navigate to the /opt/cisco/usp/ultram-manager directory.
cd /opt/cisco/usp/ultram-manager
Once this step is completed, if you are upgrading UCS servers in an existing Ultra M solution stack, proceed to step 8. If you are upgrading bare metal UCS servers, proceed to step 9.
8. Optional. If you are upgrading software for UCS servers in an existing Ultra M solution stack, then create UCS server node list configuration files for each node type as shown in the following table.
Configuration File Name File Contents
compute.cfg A list of the CIMC IP addresses for all of the Compute Nodes.
osd_compute_0.cfg The CIMC IP address of the primary OSD Compute Node (osd-compute-0).
osd_compute_1.cfg The CIMC IP address of the second OSD Compute Node (osd-compute-1).
osd_compute_2.cfg The CIMC IP address of the third OSD Compute Node (osd-compute-2).
controller_0.cfg The CIMC IP address of the primary Controller Node (controller-0).
controller_1.cfg The CIMC IP address of the second Controller Node (controller-1).
controller_2.cfg The CIMC IP address of the third Controller Node (controller-2).
NOTE: Each address must be preceded by a dash and a space ("- "). The following is an example of the required format:
- 192.100.0.9
- 192.100.0.10
- 192.100.0.11
- 192.100.0.12
Separate configuration files are required for each OSD Compute and Controller Node in order to maintain the integrity of the Ultra M solution stack throughout the upgrade process.
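As an illustration, a compute.cfg in the required format can be generated and sanity-checked like this. The CIMC addresses are the example values used throughout this appendix, and the file is written to /tmp here purely for demonstration:

```shell
# Build a node-list file: one "- <CIMC IP>" entry per line.
cat > /tmp/compute.cfg <<'EOF'
- 192.100.0.9
- 192.100.0.10
- 192.100.0.11
EOF

# Sanity check: every line must start with "- ".
BAD=$(grep -cv '^- ' /tmp/compute.cfg)
TOTAL=$(grep -c '^- ' /tmp/compute.cfg)
echo "entries: $TOTAL, malformed: $BAD"
```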
9. Optional. If you are upgrading software on bare metal UCS servers prior to deploying them as part of the Ultra M solution stack, then create a configuration file called hosts.cfg containing a list of the CIMC IP addresses for all of the servers to be used within the Ultra M solution stack except the OSP-D server/Ultra M Manager Node.
NOTE: Each address must be preceded by a dash and a space ("- "). The following is an example of the required format:
- 192.100.0.9
- 192.100.0.10
- 192.100.0.11
- 192.100.0.12
10. Create a configuration file called ospd.cfg containing the CIMC IP address of the OSP-D Server/Ultra M Manager Node.
NOTE: The address must be preceded by a dash and a space ("- "). The following is an example of the required format:
- 192.300.0.9
11. Validate your configuration files by performing a sample test of the script to pull existing firmware versions from all Controller, OSD Compute, and Compute Nodes in your Ultra M solution deployment.
./ultram_ucs_utils.py --cfg "<config file name>"
The following is an example output for a hosts.cfg file with a single Compute Node (192.100.0.7):
2017-10-01 10:36:28,189 - Successfully logged out from the server: 192.100.0.7
2017-10-01 10:36:28,190 -
------------------------------------------------------------------------------------------
Server IP    | Component                                 | Version
------------------------------------------------------------------------------------------
192.100.0.7  | bios/fw-boot-loader                       | C240M4.3.0.3c.0.0831170228
             | mgmt/fw-boot-loader                       | 3.0(3e).36
             | mgmt/fw-system                            | 3.0(3e)
             | adaptor-MLOM/mgmt/fw-boot-loader          | 4.1(2d)
             | adaptor-MLOM/mgmt/fw-system               | 4.1(3a)
             | board/storage-SAS-SLOT-HBA/fw-boot-loader | 6.30.03.0_4.17.08.00_0xC6130202
             | board/storage-SAS-SLOT-HBA/fw-system      | 4.620.00-7259
             | sas-expander-1/mgmt/fw-system             | 65104100
------------------------------------------------------------------------------------------
If you receive errors when executing the script, ensure that the CIMC username and password are correct. Additionally, verify that all of the IP addresses have been entered properly in the configuration files.
NOTE: It is highly recommended that you save the data reported in the output for later reference and validation after performing the upgrades.
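One way to act on that recommendation is to keep the pre-upgrade report on disk and diff it against a post-upgrade run. The file names and version strings below are purely illustrative stand-ins for saved ultram_ucs_utils.py output:

```shell
# Stand-ins for the saved firmware reports (illustrative content only).
cat > /tmp/fw_pre.txt <<'EOF'
192.100.0.7 | mgmt/fw-system | 3.0(3e)
EOF
cat > /tmp/fw_post.txt <<'EOF'
192.100.0.7 | mgmt/fw-system | 3.0(4d)
EOF

# Lines that changed between the two reports (diff exits non-zero on change).
diff /tmp/fw_pre.txt /tmp/fw_post.txt || true
```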
12. Take backups of the various configuration files, logs, and other relevant information using the information and instructions in the Ultra Services Platform Deployment Automation Guide.
13. Continue the upgrade process based on your deployment status:
Appendix: Using the UCS Utilities Within the Ultra M Manager
Obtaining Documentation and Submitting a Service Request
— Proceed to Shutdown the ESC VMs if the UCS servers have previously been deployed as part of the Ultra M solution stack.
— Proceed to Upgrading Firmware on UCS Bare Metal if the UCS servers have not previously been deployed as part of the Ultra M solution stack.
Shutdown the ESC VMs
The Cisco Elastic Services Controller (ESC) serves as the VNFM in Ultra M solution deployments. ESC is deployed on a redundant pair of VMs. These VMs must be shut down prior to performing software upgrades on the UCS servers in the solution deployment.
To shut down the ESC VMs:
1. Login to OSP-D and make sure to switch to the stack user (su - stack).
2. Run nova list to get the UUIDs of the ESC VMs.
nova list --fields name,host,status | grep ESC
Example output:
<--- SNIP --->
| b470cfeb-20c6-4168-99f2-1592502c2057 | vnf1-ESC-ESC-0 | tb5-ultram-osd-compute-2.localdomain | ACTIVE  |
| 157d7bfb-1152-4138-b85f-79afa96ad97d | vnf1-ESC-ESC-1 | tb5-ultram-osd-compute-1.localdomain | ACTIVE  |
<--- SNIP --->
3. Stop the standby ESC VM.
nova stop <standby_esc_vm_uuid>
4. Stop the active ESC VM.
nova stop <active_esc_vm_uuid>
5. Verify that the VMs have been shutoff.
nova list --fields name,host,status | grep ESC
Look for the entries pertaining to the ESC UUIDs.
Example output:
<--- SNIP --->
| b470cfeb-20c6-4168-99f2-1592502c2057 | vnf1-ESC-ESC-0 | tb5-ultram-osd-compute-2.localdomain | SHUTOFF |
| 157d7bfb-1152-4138-b85f-79afa96ad97d | vnf1-ESC-ESC-1 | tb5-ultram-osd-compute-1.localdomain | SHUTOFF |
<--- SNIP --->
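When many VMs are listed, this state check can be scripted by parsing the nova list table. The following sketch (the function names are ours, not part of the solution tooling) maps each ESC VM to its status and confirms that all are SHUTOFF:

```python
def vm_states(nova_list_output, name_filter="ESC"):
    """Map VM name -> status for table rows whose name contains name_filter."""
    states = {}
    for line in nova_list_output.splitlines():
        # strip the outer pipes, then split the remaining columns
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) >= 4 and name_filter in cells[1]:
            states[cells[1]] = cells[3]
    return states

def all_shutoff(states):
    """True only when at least one matching VM exists and all are SHUTOFF."""
    return bool(states) and all(s == "SHUTOFF" for s in states.values())
```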
6. Proceed to Upgrade the Compute Node Server Software.
Upgrade the Compute Node Server Software
NOTES:
■ Ensure that the ESC VMs have been shutdown according to the procedure in Shutdown the ESC VMs.
■ This procedure assumes that you are already logged in to the Ultra M Manager Node.
■ This procedure requires the compute.cfg file created as part of the procedure detailed in Perform Pre-Upgrade Preparation.
■ It is highly recommended that all Compute Nodes be upgraded using this process during a single maintenance window.
To upgrade the UCS server software on the Compute Nodes:
1. Upgrade the BIOS on the UCS server-based Compute Nodes.
./ultram_ucs_utils.py --cfg "compute.cfg" --login <cimc_username> <cimc_password> --upgrade bios --server <file_server_ip> --file <bios_firmware_file> --protocol <protocol>
Example output:
2017-09-29 09:15:48,753 - Updating BIOS firmware on all the servers
2017-09-29 09:15:48,753 - Logging on UCS Server: 192.100.0.7
2017-09-29 09:15:48,758 - No session found, creating one on server: 192.100.0.7
2017-09-29 09:15:50,194 - Login successful to server: 192.100.0.7
2017-09-29 09:16:13,269 - 192.100.0.7 => updating | Image Download (5 %), OK
2017-09-29 09:17:26,669 - 192.100.0.7 => updating | Write Host Flash (75 %), OK
2017-09-29 09:18:34,524 - 192.100.0.7 => updating | Write Host Flash (75 %), OK
2017-09-29 09:19:40,892 - 192.100.0.7 => Activating BIOS
2017-09-29 09:19:55,011 -
-------------------------------------------------------------------------
Server IP    | Overall | Updated-on | Status
-------------------------------------------------------------------------
192.100.0.7  | SUCCESS | NA         | Status: success, Progress: Done, OK
NOTE: The Compute Nodes are automatically powered down after this process leaving only the CIMC interface available.
2. Upgrade the UCS server using the Host Upgrade Utility (HUU).
./ultram_ucs_utils.py --cfg "compute.cfg" --login <cimc_username> <cimc_password> --upgrade huu --server <file_server_ip> --file <huu_iso_file> --protocol <protocol>
If the HUU script times out before completing the upgrade, the process might still be running on the remote hosts. You can periodically check the upgrade process by entering:
./ultram_ucs_utils.py --cfg "compute.cfg" --login <cimc_username> <cimc_password> --status huu-upgrade
Example output:
-----------------------------------------------------------------------------
Server IP    | Overall | Updated-on          | Status
-----------------------------------------------------------------------------
192.100.0.7  | SUCCESS | 2017-10-20 07:10:11 | Update Complete
             |         |                     | CIMC Completed, SasExpDN Completed, I350 Completed, X520 Completed, X520 Completed, 3108AB-8i Completed, UCS VIC 1227 Completed, BIOS Completed
-----------------------------------------------------------------------------
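With a large configuration file it is easy to overlook a failed server in this table. A small parser such as the sketch below (our code, written against the table layout shown above) flags any row whose Overall column is not SUCCESS:

```python
def failed_servers(huu_status_table):
    """Return server IPs whose Overall column is anything other than SUCCESS."""
    failed = []
    for line in huu_status_table.splitlines():
        cells = [c.strip() for c in line.split("|")]
        # data rows start with an IP address; skip headers, separators,
        # and continuation rows (blank first column)
        if len(cells) >= 2 and cells[0][:1].isdigit() and cells[1] and cells[1] != "SUCCESS":
            failed.append(cells[0])
    return failed
```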
3. Verify that the BIOS firmware and HUU upgrade was successful by checking the post-upgrade versions.
./ultram_ucs_utils.py --cfg "compute.cfg" --login <cimc_username> <cimc_password> --status firmwares
4. Set the package-c-state-limit CIMC setting.
./ultram_ucs_utils.py --mgmt set-bios --bios-param biosVfPackageCStateLimit --bios-values vpPackageC-StateLimit=C0/C1 --cfg compute.cfg --login <cimc_username> <cimc_password>
5. Verify that the package-c-state-limit CIMC setting has been made.
./ultram_ucs_utils.py --status bios-settings --cfg compute.cfg --login <cimc_username> <cimc_password>
Look for PackageCStateLimit to be set to C0/C1.
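If you are checking many servers, the report can be scanned programmatically. This sketch is our code; it assumes the report prints one PackageCStateLimit value per server in a key: value style, which may differ from the actual utility output:

```python
import re

def package_cstate_ok(report):
    """True when every reported PackageCStateLimit value is C0/C1."""
    # assumed formats: "PackageCStateLimit: C0/C1", "PackageCStateLimit | C0/C1"
    values = re.findall(r"PackageCStateLimit\s*[:|=]\s*(\S+)", report)
    return bool(values) and all(v == "C0/C1" for v in values)
```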
6. Modify the Grub configuration on each Compute Node.
a. Log on to the first Compute Node (compute-0) and update the kernel boot arguments to include processor.max_cstate=0 and intel_idle.max_cstate=0.
sudo grubby --info=/boot/vmlinuz-`uname -r`
sudo grubby --update-kernel=/boot/vmlinuz-`uname -r` --args="processor.max_cstate=0 intel_idle.max_cstate=0"
b. Verify that the update was successful.
sudo grubby --info=/boot/vmlinuz-`uname -r`
Ensure that processor.max_cstate=0 and intel_idle.max_cstate=0 appear in the output.
c. Reboot the Compute Nodes.
sudo reboot
d. Repeat steps a through c for all other Compute Nodes.
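Steps a and b can be verified mechanically by parsing the grubby output. The helper below is a sketch (our code, not part of the Ultra M tools) that checks the args= line for the two required parameters:

```python
REQUIRED_ARGS = ("processor.max_cstate=0", "intel_idle.max_cstate=0")

def grub_args_ok(grubby_info):
    """True when the grubby --info args= line carries both c-state parameters."""
    for line in grubby_info.splitlines():
        if line.startswith("args="):
            args = line[len("args="):].strip().strip('"')
            return all(p in args.split() for p in REQUIRED_ARGS)
    return False
```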
7. Recheck all CIMC and kernel settings.
a. Log in to the Ultra M Manager Node.
b. Verify CIMC settings
./ultram_ucs_utils.py --status bios-settings --cfg compute.cfg --login <cimc_username> <cimc_password>
c. Verify the processor c-state.
for ip in `nova list | grep -i compute | awk '{print $12}' | sed 's/ctlplane=//g'`; do ssh heat-admin@$ip 'sudo cat /sys/module/intel_idle/parameters/max_cstate'; done
for ip in `nova list | grep -i compute | awk '{print $12}' | sed 's/ctlplane=//g'`; do ssh heat-admin@$ip 'sudo cpupower idle-info'; done
8. Proceed to Upgrade the OSD Compute Node Server Software.
NOTE: If the OSD Compute Nodes are not being upgraded as part of this maintenance window, proceed to Restarting the UAS and ESC (VNFM) VMs.
Upgrade the OSD Compute Node Server Software
NOTES:
■ This procedure requires the osd_compute_0.cfg, osd_compute_1.cfg, and osd_compute_2.cfg files created as part of the procedure detailed in Perform Pre-Upgrade Preparation.
■ It is highly recommended that all OSD Compute Nodes be upgraded using this process during a single maintenance window.
To upgrade the UCS server software on the OSD Compute Nodes:
1. Move the Ceph storage to maintenance mode.
a. Log on to the lead Controller Node (controller-0).
b. Move the Ceph storage to maintenance mode.
sudo ceph status
sudo ceph osd set noout
sudo ceph osd set norebalance
sudo ceph status
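The effect of the two set commands can be confirmed from the ceph status output. This sketch is our code; it assumes the "noout,norebalance flag(s) set" line format used by common Ceph releases:

```python
def ceph_flags_set(status_text, required=("noout", "norebalance")):
    """True when all required OSD flags appear in the 'flag(s) set' line."""
    for line in status_text.splitlines():
        if "flag(s) set" in line:
            flags = line.split("flag(s) set")[0].strip().split(",")
            return all(f in flags for f in required)
    return False
```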
2. Optional. Shutdown the ESC VMs.
3. Log on to the Ultra M Manager Node.
4. Upgrade the BIOS on the initial UCS server-based OSD Compute Node (osd-compute-0).
./ultram_ucs_utils.py --cfg "osd_compute_0.cfg" --login <cimc_username> <cimc_password> --upgrade bios --server <file_server_ip> --file <bios_firmware_file> --protocol <protocol>
Example output:
2017-09-29 09:15:48,753 - Updating BIOS firmware on all the servers
2017-09-29 09:15:48,753 - Logging on UCS Server: 192.100.0.17
2017-09-29 09:15:48,758 - No session found, creating one on server: 192.100.0.17
2017-09-29 09:15:50,194 - Login successful to server: 192.100.0.17
2017-09-29 09:16:13,269 - 192.100.0.17 => updating | Image Download (5 %), OK
2017-09-29 09:17:26,669 - 192.100.0.17 => updating | Write Host Flash (75 %), OK
2017-09-29 09:18:34,524 - 192.100.0.17 => updating | Write Host Flash (75 %), OK
2017-09-29 09:19:40,892 - 192.100.0.17 => Activating BIOS
2017-09-29 09:19:55,011 -
-------------------------------------------------------------------------
Server IP    | Overall | Updated-on | Status
-------------------------------------------------------------------------
192.100.0.17 | SUCCESS | NA         | Status: success, Progress: Done, OK
NOTE: The OSD Compute Node is automatically powered down after this process, leaving only the CIMC interface available.
5. Upgrade the UCS server using the Host Upgrade Utility (HUU).
./ultram_ucs_utils.py --cfg "osd_compute_0.cfg" --login <cimc_username> <cimc_password> --upgrade huu --server <file_server_ip> --file <huu_iso_file> --protocol <protocol>
If the HUU script times out before completing the upgrade, the process might still be running on the remote hosts. You can periodically check the upgrade process by entering:
./ultram_ucs_utils.py --cfg "osd_compute_0.cfg" --login <cimc_username> <cimc_password> --status huu-upgrade
Example output:
-----------------------------------------------------------------------------
Server IP    | Overall | Updated-on          | Status
-----------------------------------------------------------------------------
192.100.0.17 | SUCCESS | 2017-10-20 07:10:11 | Update Complete
             |         |                     | CIMC Completed, SasExpDN Completed, I350 Completed, X520 Completed, X520 Completed, 3108AB-8i Completed, UCS VIC 1227 Completed, BIOS Completed
-----------------------------------------------------------------------------
6. Verify that the BIOS firmware and HUU upgrade was successful by checking the post-upgrade versions.
./ultram_ucs_utils.py --cfg "osd_compute_0.cfg" --login <cimc_username> <cimc_password> --status firmwares
7. Set the package-c-state-limit CIMC setting.
./ultram_ucs_utils.py --mgmt set-bios --bios-param biosVfPackageCStateLimit --bios-values vpPackageC-StateLimit=C0/C1 --cfg osd_compute_0.cfg --login <cimc_username> <cimc_password>
8. Verify that the package-c-state-limit CIMC setting has been made.
./ultram_ucs_utils.py --status bios-settings --cfg osd_compute_0.cfg --login <cimc_username> <cimc_password>
Look for PackageCStateLimit to be set to C0/C1.
9. Modify the Grub configuration on the primary OSD Compute Node.
a. Log on to the OSD Compute Node (osd-compute-0) and update the kernel boot arguments to include processor.max_cstate=0 and intel_idle.max_cstate=0.
sudo grubby --info=/boot/vmlinuz-`uname -r`
sudo grubby --update-kernel=/boot/vmlinuz-`uname -r` --args="processor.max_cstate=0 intel_idle.max_cstate=0"
b. Verify that the update was successful.
sudo grubby --info=/boot/vmlinuz-`uname -r`
Ensure that processor.max_cstate=0 and intel_idle.max_cstate=0 appear in the output.
c. Reboot the OSD Compute Node.
sudo reboot
10. Recheck all CIMC and kernel settings.
a. Verify the processor c-state.
cat /sys/module/intel_idle/parameters/max_cstate
cpupower idle-info
b. Login to Ultra M Manager Node.
c. Verify CIMC settings.
./ultram_ucs_utils.py --status bios-settings --cfg osd_compute_0.cfg --login <cimc_username> <cimc_password>
11. Repeat steps 4 through 10 on the second OSD Compute Node (osd-compute-1).
NOTE: Be sure to use the osd_compute_1.cfg file where needed.
12. Repeat steps 4 through 10 on the third OSD Compute Node (osd-compute-2).
NOTE: Be sure to use the osd_compute_2.cfg file where needed.
13. Check the ironic node-list and restore any hosts that went into maintenance mode (maintenance state true).
a. Login to OSP-D and make sure to switch to the stack user (su - stack).
b. Perform the check and any required restorations.
ironic node-list
ironic node-set-maintenance <node_uuid> false
14. Move the Ceph storage out of maintenance mode.
a. Log on to the lead Controller Node (controller-0).
b. Move the Ceph storage out of maintenance mode.
sudo ceph status
sudo ceph osd unset noout
sudo ceph osd unset norebalance
sudo ceph status
sudo pcs status
15. Proceed to Restarting the UAS and ESC (VNFM) VMs.
Restarting the UAS and ESC (VNFM) VMs
After the UCS server software upgrades are complete, the VMs that were previously shut down must be restarted.
To restart the VMs:
1. Login to OSP-D and make sure to switch to the stack user (su - stack).
2. Run nova list to get the UUIDs of the UAS and ESC VMs.
3. Start the AutoIT-VNF VM.
nova start <autoit_vnf_vm_uuid>
4. Start the AutoDeploy VM.
nova start <autodeploy_vm_uuid>
5. Start the standby ESC VM.
nova start <standby_esc_vm_uuid>
6. Start the active ESC VM.
nova start <active_esc_vm_uuid>
7. Verify that the VMs have been restarted and are ACTIVE.
nova list --fields name,host,status | grep ESC
Once ESC is up and running, it triggers the recovery of the rest of the VMs (AutoVNF, UEMs, CFs, and SFs).
8. Login to each of the VMs and verify that they are operational.
Upgrade the Controller Node Server Software
NOTES:
■ This procedure requires the controller_0.cfg, controller_1.cfg, and controller_2.cfg files created as part of the procedure detailed in Perform Pre-Upgrade Preparation.
■ It is highly recommended that all Controller Nodes be upgraded using this process during a single maintenance window.
To upgrade the UCS server software on the Controller Nodes:
1. Check the Controller Node status and move the Pacemaker Cluster Stack (PCS) to maintenance mode.
a. Login to the primary Controller Node (controller-0) from the OSP-D Server.
b. Check the state of the Controller Node Pacemaker Cluster Stack (PCS).
sudo pcs status
NOTE: Resolve any issues prior to proceeding to the next step.
c. Place the PCS cluster on the Controller Node into standby mode.
sudo pcs cluster standby
d. Recheck the Controller Node status again and make sure that the Controller Node is in standby mode for the PCS cluster.
sudo pcs status
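The standby check in step d can be automated by scanning the pcs status output. The following is a sketch (our code; it assumes the "Node <name>: standby" line format printed by Pacemaker):

```python
def node_in_standby(pcs_status, node):
    """True when pcs status lists the given node as standby."""
    for line in pcs_status.splitlines():
        line = line.strip()
        if line.startswith("Node ") and node in line and "standby" in line:
            return True
    return False
```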
2. Log on to the Ultra M Manager Node.
3. Upgrade the BIOS on the primary UCS server-based Controller Node (controller-0).
./ultram_ucs_utils.py --cfg "controller_0.cfg" --login <cimc_username> <cimc_password> --upgrade bios --server <file_server_ip> --file <bios_firmware_file> --protocol <protocol>
Example output:
2017-09-29 09:15:48,753 - Updating BIOS firmware on all the servers
2017-09-29 09:15:48,753 - Logging on UCS Server: 192.100.2.7
2017-09-29 09:15:48,758 - No session found, creating one on server: 192.100.2.7
2017-09-29 09:15:50,194 - Login successful to server: 192.100.2.7
2017-09-29 09:16:13,269 - 192.100.2.7 => updating | Image Download (5 %), OK
2017-09-29 09:17:26,669 - 192.100.2.7 => updating | Write Host Flash (75 %), OK
2017-09-29 09:18:34,524 - 192.100.2.7 => updating | Write Host Flash (75 %), OK
2017-09-29 09:19:40,892 - 192.100.2.7 => Activating BIOS
2017-09-29 09:19:55,011 -
-------------------------------------------------------------------------
Server IP    | Overall | Updated-on | Status
-------------------------------------------------------------------------
192.100.2.7  | SUCCESS | NA         | Status: success, Progress: Done, OK
NOTE: The Controller Node is automatically powered down after this process, leaving only the CIMC interface available.
4. Upgrade the UCS server using the Host Upgrade Utility (HUU).
./ultram_ucs_utils.py --cfg "controller_0.cfg" --login <cimc_username> <cimc_password> --upgrade huu --server <file_server_ip> --file <huu_iso_file> --protocol <protocol>
If the HUU script times out before completing the upgrade, the process might still be running on the remote hosts. You can periodically check the upgrade process by entering:
./ultram_ucs_utils.py --cfg "controller_0.cfg" --login <cimc_username> <cimc_password> --status huu-upgrade
Example output:
-----------------------------------------------------------------------------
Server IP    | Overall | Updated-on          | Status
-----------------------------------------------------------------------------
192.100.2.7  | SUCCESS | 2017-10-20 07:10:11 | Update Complete
             |         |                     | CIMC Completed, SasExpDN Completed, I350 Completed, X520 Completed, X520 Completed, 3108AB-8i Completed, UCS VIC 1227 Completed, BIOS Completed
-----------------------------------------------------------------------------
5. Verify that the BIOS firmware and HUU upgrade was successful by checking the post-upgrade versions.
./ultram_ucs_utils.py --cfg "controller_0.cfg" --login <cimc_username> <cimc_password> --status firmwares
6. Set the package-c-state-limit CIMC setting.
./ultram_ucs_utils.py --mgmt set-bios --bios-param biosVfPackageCStateLimit --bios-values vpPackageC-StateLimit=C0/C1 --cfg controller_0.cfg --login <cimc_username> <cimc_password>
7. Verify that the package-c-state-limit CIMC setting has been made.
./ultram_ucs_utils.py --status bios-settings --cfg controller_0.cfg --login <cimc_username> <cimc_password>
Look for PackageCStateLimit to be set to C0/C1.
8. Modify the Grub configuration on the primary Controller Node.
a. Log on to the Controller Node (controller-0) and update the kernel boot arguments to include processor.max_cstate=0 and intel_idle.max_cstate=0.
sudo grubby --info=/boot/vmlinuz-`uname -r`
sudo grubby --update-kernel=/boot/vmlinuz-`uname -r` --args="processor.max_cstate=0 intel_idle.max_cstate=0"
b. Verify that the update was successful.
sudo grubby --info=/boot/vmlinuz-`uname -r`
Ensure that processor.max_cstate=0 and intel_idle.max_cstate=0 appear in the output.
c. Reboot the Controller Node.
sudo reboot
9. Recheck all CIMC and kernel settings.
a. Verify the processor c-state.
cat /sys/module/intel_idle/parameters/max_cstate
cpupower idle-info
b. Login to Ultra M Manager Node.
c. Verify CIMC settings.
./ultram_ucs_utils.py --status bios-settings --cfg controller_0.cfg --login <cimc_username> <cimc_password>
10. Check the ironic node-list and restore the Controller Node if it went into maintenance mode (maintenance state true).
a. Login to OSP-D and make sure to switch to the stack user (su - stack).
b. Perform the check and any required restorations.
ironic node-list
ironic node-set-maintenance <node_uuid> false
11. Take the Controller Node out of the PCS standby state.
sudo pcs cluster unstandby
12. Wait 5 to 10 minutes and check the state of the PCS cluster to verify that the Controller Node is ONLINE and all services are in good state.
sudo pcs status
13. Repeat steps 3 through 11 on the second Controller Node (controller-1).
NOTE: Be sure to use the controller_1.cfg file where needed.
14. Repeat steps 3 through 11 on the third Controller Node (controller-2).
NOTE: Be sure to use the controller_2.cfg file where needed.
15. Proceed to Upgrading Firmware on the OSP-D Server/Ultra M Manager Node.
Upgrading Firmware on UCS Bare Metal
NOTES:
■ This procedure assumes that the UCS servers receiving the software (firmware) upgrade have not previously been deployed as part of an Ultra M solution stack.
■ The instructions in this section pertain to all servers to be used as part of an Ultra M solution stack except the OSP- D Server/Ultra M Manager Node.
■ This procedure requires the hosts.cfg file created as part of the procedure detailed in Perform Pre-Upgrade Preparation.
To upgrade the software on the UCS servers:
1. Log on to the Ultra M Manager Node.
2. Upgrade the BIOS on the UCS servers.
./ultram_ucs_utils.py --cfg "hosts.cfg" --login <cimc_username> <cimc_password> --upgrade bios --server <file_server_ip> --file <bios_firmware_file> --protocol <protocol>
Example output:
2017-09-29 09:15:48,753 - Updating BIOS firmware on all the servers
2017-09-29 09:15:48,753 - Logging on UCS Server: 192.100.0.7
2017-09-29 09:15:48,758 - No session found, creating one on server: 192.100.0.7
2017-09-29 09:15:50,194 - Login successful to server: 192.100.0.7
2017-09-29 09:16:13,269 - 192.100.0.7 => updating | Image Download (5 %), OK
2017-09-29 09:17:26,669 - 192.100.0.7 => updating | Write Host Flash (75 %), OK
2017-09-29 09:18:34,524 - 192.100.0.7 => updating | Write Host Flash (75 %), OK
2017-09-29 09:19:40,892 - 192.100.0.7 => Activating BIOS
2017-09-29 09:19:55,011 -
-------------------------------------------------------------------------
Server IP    | Overall | Updated-on | Status
-------------------------------------------------------------------------
192.100.0.7  | SUCCESS | NA         | Status: success, Progress: Done, OK
NOTE: The nodes are automatically powered down after this process leaving only the CIMC interface available.
3. Upgrade the UCS server using the Host Upgrade Utility (HUU).
./ultram_ucs_utils.py --cfg "hosts.cfg" --login <cimc_username> <cimc_password> --upgrade huu --server <file_server_ip> --file <huu_iso_file> --protocol <protocol>
If the HUU script times out before completing the upgrade, the process might still be running on the remote hosts. You can periodically check the upgrade process by entering:
./ultram_ucs_utils.py --cfg "hosts.cfg" --login <cimc_username> <cimc_password> --status huu-upgrade
Example output:
-----------------------------------------------------------------------------
Server IP    | Overall | Updated-on          | Status
-----------------------------------------------------------------------------
192.100.0.7  | SUCCESS | 2017-10-20 07:10:11 | Update Complete
             |         |                     | CIMC Completed, SasExpDN Completed, I350 Completed, X520 Completed, X520 Completed, 3108AB-8i Completed, UCS VIC 1227 Completed, BIOS Completed
-----------------------------------------------------------------------------
4. Verify that the BIOS firmware and HUU upgrade was successful by checking the post-upgrade versions.
./ultram_ucs_utils.py --cfg "hosts.cfg" --login <cimc_username> <cimc_password> --status firmwares
5. Set the package-c-state-limit CIMC setting.
./ultram_ucs_utils.py --mgmt set-bios --bios-param biosVfPackageCStateLimit --bios-values vpPackageC-StateLimit=C0/C1 --cfg hosts.cfg --login <cimc_username> <cimc_password>
6. Verify that the package-c-state-limit CIMC setting has been made.
./ultram_ucs_utils.py --status bios-settings --cfg hosts.cfg --login <cimc_username> <cimc_password>
Look for PackageCStateLimit to be set to C0/C1.
7. Recheck all CIMC and BIOS settings.
a. Log in to the Ultra M Manager Node.
b. Verify CIMC settings
./ultram_ucs_utils.py --status bios-settings --cfg hosts.cfg --login <cimc_username> <cimc_password>
8. Update the ComputeKernelArgs statement in the network.yaml file to include the processor.max_cstate=0 and intel_idle.max_cstate=0 arguments.
vi network.yaml
<---SNIP--->
ComputeKernelArgs: "intel_iommu=on default_hugepagesz=1GB hugepagesz=1G hugepages=12 processor.max_cstate=0 intel_idle.max_cstate=0"
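A simple pre-deployment check can confirm that the ComputeKernelArgs line carries both c-state arguments. This sketch (our code, not part of the deployment tooling) performs that check on the network.yaml text:

```python
def compute_kernel_args_ok(yaml_text):
    """True when the ComputeKernelArgs line includes both c-state arguments."""
    for line in yaml_text.splitlines():
        if line.strip().startswith("ComputeKernelArgs:"):
            return ("processor.max_cstate=0" in line
                    and "intel_idle.max_cstate=0" in line)
    return False
```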
9. Modify the Grub configuration on all Controller Nodes after the VIM (Overcloud) has been deployed.
a. Log into your first Controller Node (controller-0).
ssh heat-admin@<controller_0_ip_address>
b. Check the grubby settings.
sudo grubby --info=/boot/vmlinuz-`uname -r`
Example output:
index=0
kernel=/boot/vmlinuz-3.10.0-514.21.1.el7.x86_64
args="ro console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet "
root=UUID=fa9e939e-9e3c-4f1c-a07c-3f506756ad7b
initrd=/boot/initramfs-3.10.0-514.21.1.el7.x86_64.img
title=Red Hat Enterprise Linux Server (3.10.0-514.21.1.el7.x86_64) 7.3 (Maipo)
c. Update the kernel boot arguments to include processor.max_cstate=0 and intel_idle.max_cstate=0.
sudo grubby --update-kernel=/boot/vmlinuz-`uname -r` --args="processor.max_cstate=0 intel_idle.max_cstate=0"
d. Verify that the update was successful.
sudo grubby --info=/boot/vmlinuz-`uname -r`
Example output:
index=0
kernel=/boot/vmlinuz-3.10.0-514.21.1.el7.x86_64
args="ro console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet processor.max_cstate=0 intel_idle.max_cstate=0"
root=UUID=fa9e939e-9e3c-4f1c-a07c-3f506756ad7b
initrd=/boot/initramfs-3.10.0-514.21.1.el7.x86_64.img
title=Red Hat Enterprise Linux Server (3.10.0-514.21.1.el7.x86_64) 7.3 (Maipo)
e. Reboot the Controller Node.
sudo reboot
NOTE: Do not proceed with the next step until the Controller Node is up and rejoins the cluster.
f. Repeat steps a through e for all other Controller Nodes.
10. Proceed to Upgrading Firmware on the OSP-D Server/Ultra M Manager Node.
Upgrading Firmware on the OSP-D Server/Ultra M Manager Node
To upgrade the firmware on the OSP-D server/Ultra M Manager node:
1. Open your web browser.
2. Enter the CIMC address of the OSP-D Server/Ultra M Manager Node in the URL field.
3. Login to the CIMC using the configured user credentials.
4. Click Launch KVM Console.
5. Click Virtual Media.
6. Click Add Image and select the HUU ISO file pertaining to the version you wish to upgrade to.
7. In the Client View, select the Mapped checkbox for the ISO that you added. Wait for the selected ISO to appear as a mapped device.
8. Boot the server and press F6 when prompted to open the Boot Menu.
9. Select the desired ISO.
10. Select Cisco vKVM-Mapped vDVD1.22, and press Enter. The server boots from the selected device.
11. Follow the onscreen instructions to update the desired software and reboot the server. Proceed to the next step once the server has rebooted.
12. Log on to the Ultra M Manager Node.
13. Set the package-c-state-limit CIMC setting.
./ultram_ucs_utils.py --mgmt set-bios --bios-param biosVfPackageCStateLimit --bios-values vpPackageC-StateLimit=C0/C1 --cfg ospd.cfg --login <cimc_username> <cimc_password>
14. Verify that the package-c-state-limit CIMC setting has been made.
./ultram_ucs_utils.py --status bios-settings --cfg ospd.cfg --login <cimc_username> <cimc_password>
Look for PackageCStateLimit to be set to C0/C1.
15. Update the kernel boot arguments to include processor.max_cstate=0 and intel_idle.max_cstate=0.
sudo grubby --info=/boot/vmlinuz-`uname -r`
sudo grubby --update-kernel=/boot/vmlinuz-`uname -r` --args="processor.max_cstate=0 intel_idle.max_cstate=0"
16. Verify that the update was successful.
sudo grubby --info=/boot/vmlinuz-`uname -r`
Ensure that processor.max_cstate=0 and intel_idle.max_cstate=0 appear in the output.
17. Reboot the server.
sudo reboot
18. Recheck all CIMC and kernel settings upon reboot.
a. Verify CIMC settings
./ultram_ucs_utils.py --status bios-settings --cfg ospd.cfg --login <cimc_username> <cimc_password>
b. Verify the processor c-state.
cat /sys/module/intel_idle/parameters/max_cstate
cpupower idle-info
Appendix: ultram_ucs_utils.py Help
Enter the following command to display help for the UCS utilities available through the Ultra M Manager:
./ultram_ucs_utils.py -h
usage: ultram_ucs_utils.py [-h] --cfg CFG --login UC_LOGIN UC_LOGIN
                           (--upgrade | --mgmt | --status | --undercloud UC_RC)
                           [--mode] [--serial-delay SERIAL_DELAY]
                           [--server SERVER] [--file FILE]
                           [--protocol {http,https,tftp,sftp,ftp,scp}]
                           [--access ACCESS ACCESS] [--secure-boot]
                           [--update-type {immediate,delayed}] [--reboot]
                           [--timeout TIMEOUT] [--verify] [--stop-on-error]
                           [--bios-param BIOS_PARAM]
                           [--bios-values BIOS_VALUES [BIOS_VALUES ...]]
optional arguments:
  -h, --help            show this help message and exit
  --cfg CFG             Configuration file to read servers
  --login UC_LOGIN UC_LOGIN
                        Common Login UserName / Password to authenticate UCS servers
  --upgrade             Firmware upgrade, choose one from:
                          'bios': Upgrade BIOS firmware version
                          'cimc': Upgrade CIMC firmware version
                          'huu' : Upgrade All Firmwares via HUU based on ISO
  --mgmt                Server Management Tasks, choose one from:
                          'power-up'      : Power on the server immediately
                          'power-down'    : Power down the server (non-graceful)
                          'soft-shut-down': Shutdown the server gracefully
                          'power-cycle'   : Power Cycle the server immediately
                          'hard-reset'    : Hard Reset the server
                          'cimc-reset'    : Reboot CIMC
                          'cmos-reset'    : Reset CMOS
                          'set-bios'      : Set BIOS Parameter
  --status              Firmware Update Status:
                          'bios-upgrade'  : Last BIOS upgrade status
                          'cimc-upgrade'  : Last CIMC upgrade status
                          'huu-upgrade'   : Last ISO upgrade via Host Upgrade Utilities
                          'firmwares'     : List Current set of running firmware versions
                          'server'        : List Server status
                          'bios-settings' : List BIOS Settings
  --undercloud UC_RC    Get the list of servers from undercloud
  --mode                Execute action in serial/parallel
  --serial-delay SERIAL_DELAY
                        Delay (seconds) in executing firmware upgrades on node in case of serial mode
Firmware Upgrade Options:
  --server SERVER       Server IP hosting the file via selected protocol
  --file FILE           Firmware file path for UCS server to access from file server
  --protocol {http,https,tftp,sftp,ftp,scp}
                        Protocol to get the firmware file on UCS server
  --access ACCESS ACCESS
                        UserName / Password to access the file from remote server using https,sftp,ftp,scp
  --secure-boot         Use CIMC Secure-Boot.
  --update-type {immediate,delayed}
                        Update type whether to send delayed update to server or immediate
  --reboot              Reboot CIMC before performing update
  --timeout TIMEOUT     Update timeout in mins should be more than 30 min and less than 200 min
  --verify              Use this option to verify update after reboot
  --stop-on-error       Stop the firmware update once an error is encountered
BIOS Parameters configuration:
  --bios-param BIOS_PARAM
                        BIOS Parameter Name to be set
  --bios-values BIOS_VALUES [BIOS_VALUES ...]
                        BIOS Parameter values in terms of key=value pairs separated by space
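When scripting around the utility, it can help to assemble the argument list from the options documented above. The sketch below is our code (it only composes the argv list and does not validate option combinations); it builds an invocation such as the firmware-version query used earlier in this appendix:

```python
def build_utils_cmd(cfg, username, password, action, choice=None, extra=None):
    """Compose an ultram_ucs_utils.py argv list from the documented options."""
    argv = ["./ultram_ucs_utils.py", "--cfg", cfg,
            "--login", username, password,
            "--" + action]          # e.g. 'status', 'upgrade', 'mgmt'
    if choice:
        argv.append(choice)          # e.g. 'firmwares', 'bios', 'huu-upgrade'
    argv.extend(extra or [])         # e.g. ['--server', ip, '--file', path]
    return argv
```

The resulting list can be handed to subprocess.run on the Ultra M Manager Node.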