OpsBridge Container Deployment Troubleshooting Toolkit



App Support Tiers

MICRO FOCUS SUPPORTED

Support via Micro Focus Software Support, with a ticket filed against the associated product.

PARTNER

Micro Focus offers a content partnership program for select partners. Support for Partner Content offerings is provided by the partner and not by Micro Focus or the Micro Focus community.

MICRO FOCUS COMMUNITY

Micro Focus Community Content is provided by Micro Focus for the benefit of customers; support for it is not available via Micro Focus Software Support but through the specific community content forums.

COMMUNITY

Community Contributed Content is provided by Micro Focus customers and supported by them.


A supportability toolkit for troubleshooting OpsBridge Suite container deployments. The toolkit includes log collection tools for the different capabilities, a requirement check tool to validate the suite environment, Vertica and Postgres DB validation tools, and CLI aliases.
330 downloads

Description

########## OpsBridge Suite Troubleshooting Toolkit ReadMe.txt #########
########## Version 1.6 09-20-2019 (2018.11/2019.02/2019.05/2019.08) #########

Extract "opsb-suite-checker-sept-20.tar.gz" to a temporary location (e.g. /tmp), on a OpsB suite Master node. The current version of toolkit has these tools included.
Special Notes: If the files are extracted directly on Linux node, the file permissions are already set appropriately as part of the archive. If the files are extracted on Windows server and transferred via ftp, please set 'binary mode' for transfer. If the files are transferred in 'ASCII mode', please use the "dos2unix" command to convert CRLF to Unix style LFs. Post transfer, provide exec permissions to the all the shell scripts (.sh files) using "set +x".
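For example, a typical preparation sequence could look like this (a minimal sketch; the extraction directory name is an assumption and may differ for your archive):
#cd /tmp
#tar -xzf opsb-suite-checker-sept-20.tar.gz
#cd opsb-suite-checker
#find . -name "*.sh" -exec dos2unix {} \; (only needed if the files were transferred in ASCII mode)
#chmod +x loggrabber/*.sh opsb_reqcheck/*.sh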

1. LogGrabber for various suite capabilities

Description: Use this tool to collect log files from the customer environment for offline troubleshooting. Log files from the different capabilities are captured and compressed for shipping to Micro Focus, so that Support or R&D can review them to understand the potential problem.

Usage: Execute the command opsb_loggrabber.sh to collect logs.
#cd loggrabber
#./opsb_loggrabber.sh -l|--log [Option]

Options:
-l|--log <Loggrabber capability> : Creates a support dump for the selected capability.
-h : Print this help.
Supported Capabilities:
cdf : CDF logs (This option will run the CDF support_dump tool).
obm : OBM logs (This option will run OBM's LogGrabber tool on the omi pods).
bvd : BVD logs (This will collect the BVD logs from NFS)
iidl : Data Lake logs (This will collect the COSO logs from NFS)
cs : Virtualization Collector logs (This will collect the Virtualization Collector logs from NFS)
pm : Performance Management logs (This will collect the Performance Manager logs from NFS)
vertica : Vertica DB analyzer (This will run opsb_vertica_health_check.sh, which collects Vertica performance data)
all : For log files of all capabilities (This option will run and collect logs from all above capabilities).

Examples:
./opsb_loggrabber.sh -l obm
./opsb_loggrabber.sh -l bvd
./opsb_loggrabber.sh -l all

Output: The tool creates an archive containing all the log files collected from the different pods and the master and worker nodes. The output file is "opsb_suite_support_data.<date>.tar.gz".

Parameters:
cdf: When the "cdf" or "all" option is selected, log files and pod details related to the ITOM platform are archived in the output files. This requires the username/password for the management portal. The tool also prompts for a password with which the archive file is encrypted; this password must be remembered to extract the files. To extract the .aes file created as part of this option, use the command:
"dd if=<aes_filename> | openssl aes-256-ecb -d -k <password> | tar zxf -"
Replace <aes_filename> and <password> with the appropriate values.
vertica: Collects statistics from the Vertica DB. This option can be used from the OpsBridge master node or an external Vertica DB server.

2. Vertica health check tool

Description: This tool can be used to evaluate Vertica DB health. It can be run independently on a Vertica DB server or on a server with the vsql client installed. If the tool is run on a master node, it will identify whether Vertica DB is running in a pod and check the health of the Vertica DB inside the pod. (Note: Running the ITOM Vertica DB in a pod is not recommended for production setups.) The tool will prompt for the Vertica server and user details needed to connect to the Vertica DB.

Usage:
#cd loggrabber
#./opsb_vertica_health_check.sh
This will prompt for the Vertica server and user details needed to connect to the Vertica DB.

3. Requirement check tool

Description: This tool verifies whether the system meets the requirements for the given system type. The current version checks requirements such as:
For OpsBridge:
* Firewall must be disabled
* Chrony service must be enabled
* Required ports are available
* Kernel parameters
* Disk speed
* Swap settings
* Required rpm packages are installed
For Vertica:
* Firewall must be disabled
* Chrony service must be enabled
* Disk speed and other I/O settings
* Required rpm packages are installed
* Swap settings
Usage:
#cd opsb_reqcheck
#opsb_reqcheck.sh -type OPSB

opsb_reqcheck.sh -type [OPSB|VERTICA] [-cfg <config>]
opsb_reqcheck.sh -ver
opsb_reqcheck.sh [-? | -help]
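For example, on the node that will host the Vertica database, the same tool can be run with the VERTICA type (using only the documented options shown above):
#./opsb_reqcheck.sh -type VERTICA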

Sample output is included in the sample_output folder.
Note: If additional ports or rpm packages need to be added or edited, add them in the corresponding section of the opsb_reqcheck_conf.xml file.

4. Postgres requirement check:

Description: This tool can be used to evaluate the requirements for an external Postgres database. The Postgres DB config file is evaluated for valid settings.

Usage:
#cd opsb_reqcheck
#opsb_extpginfo.sh
This will prompt for the Postgres DB names used for OBM, as well as the user details to access the DB server and its configuration.


5. Resource Check tool

Description: This tool was developed by the SMAX team. Knowing pod statistics often helps in understanding bottlenecks. The tool reads pod resource usage and color-codes the details according to the given threshold values. It requires Python 2.x (2.7 or above) to run.

Usage:
#cd resourceinfo
#python check_resources.py (for pods under all namespaces)

6. OpsBridge Pod Info tool

Description: This tool was developed by the TS team. It gets information about the Kubernetes pods and nodes associated with OpsB. Use this utility to check the status of the OpsBridge pods. It requires Python 2.x (2.7 or above) to run.

Usage:
#cd resourceinfo
#python opsb-podinfo.py (for pods under all OpsBridge namespaces)

7. OpsBridge Node Health tool

Description: This tool was developed by the TS team. It shows basic health information for the nodes in a cluster.

Usage:
#cd resourceinfo
#python opsb-nodehealth.py (for all the nodes)

8. Simplified Containerized Commands

Description: It can be cumbersome to memorize, type, and run long kubectl commands. The "k8s" alias file introduces aliases for quicker typing of commands. Please refer to k8s_alias_usage.txt for sample output.

The alias/k8s file in the folder can be used to load aliases into the shell to help with supportability of containerized environments. For example, instead of running the following command to list all pods:
kubectl get pods -n opsbridge<-hash> -o wide
the source file allows you to simply run:
opsb-getpods
To use this functionality, copy the file (e.g. k8s) to the master/worker node and load it with the source command.
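For example, to copy the alias file from the extracted toolkit to another cluster node and load it there (the hostname is hypothetical):
#scp alias/k8s root@cluster-node1:/root/k8s
#ssh root@cluster-node1
#source /root/k8s
#opsb-getpods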

Usage:
#cd alias
#source k8s
Once the file is loaded, the aliases within the k8s file can be used:
OpsBridge
opsb-getns = Get opsbridge namespace
opsb-watch = Watch opsbridge pods
opsb-exec0 = Exec bash shell into omi-0 pod
opsb-getpods = Get opsbridge pods
opsb-scale-down = Scale omi deployment down
opsb-scale-up = Scale single omi deployment up
opsb-scale-upha = Scale HA omi deployment up

CDF
cdf-clusperf = List CPU and Memory utilization of cluster nodes
cdf-desc-nodes = Describe nodes
cdf-version = List CDF version
cdf-watch-all = Watch all pods
cdf-get-pods = Get all pods
cdf-exec-idm = Exec bash shell into idm pod
cdf-problem-pods = List pods which are not running/completed
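To list the loaded aliases from the shell, the standard alias builtin can be used, for example:
#alias | grep -e 'opsb-' -e 'cdf-'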

To see what command will be executed behind an alias, examine the k8s file or type 'alias'. More complicated commands can be used to automatically insert the correct pod-hash / namespace-hash name; examples can be seen in the k8s file. Consensus on the format of the alias commands suggests that commands with dashes between the words are preferred over case-sensitive names. Once a list is built, the idea is to have the relevant teams (CDF, OpsB, SMAX, etc.) build their own source files and have them embedded with the product and installed upon the installation of CDF.

Releases

Release: OpsBridge Suite Container Deployment Troubleshooting Toolkit 1.6
Size: 101.0 KB
Date: Sep 24, 2019
Product compatibility
Operations Bridge (OPSB) Suite
Version 2019.08 · 2019.02 · 2019.05
Version 2018.02 · 2018.05 · 2018.08 · 2018.11
Release notes

########## OpsBridge Suite Container Deployment Troubleshooting Toolkit ReadMe.txt #########
########## Version 1.6 09-24-2019 (2018.11/2019.02/2019.05/2019.08) #########

Extract "opsb-suite-checker-<ver>.tar.gz" to a temporary location (e.g. /tmp), on a OpsB suite Master node. The current version of toolkit has these tools included.
Special Notes: If the files are extracted directly on Linux node, the file permissions are already set appropriately as part of the archive. If the files are extracted on Windows server and transferred via ftp, please set 'binary mode' for transfer. If the files are transferred in 'ASCII mode', please use the "dos2unix" command to convert CRLF to Unix style LFs. Post transfer, provide exec permissions to the all the shell scripts (.sh files) using "set +x".
1. LogGrabber for various suite capabilities

Description: Use this tool to collect log files from the customer environment for offline troubleshooting. Log files from the different capabilities are captured and compressed for shipping to Micro Focus, so that Support or R&D can review them to understand the potential problem.

Usage:
#cd loggrabber
#./opsb_loggrabber.sh -l|--log [Option]

Options:
-l|--log <Loggrabber capability> : Creates a support dump for the selected capability.
-h : Print this help.
-------------------------------------------------------------------------------------
Supported Capabilities:
cdf : CDF logs
obm : OBM logs
bvd : BVD logs
iidl : COSO Data Lake logs
cs : Collection Services logs
pm : Performance Management logs
vertica : Vertica DB analyzer
all : For log files of all capabilities

Examples:
./loggrabber/opsb_loggrabber.sh -l obm
./loggrabber/opsb_loggrabber.sh -l all
cdf : CDF logs (This option will run the CDF support_dump tool).
obm : OBM logs (This option will run OBM's LogGrabber tool on the omi pods).
bvd : BVD logs (This will collect the BVD logs from NFS)
iidl : Data Lake logs (This will collect the COSO logs from NFS)
cs : Collection Services logs (This will collect the Collect Once Collection services logs from NFS)
pm : Performance Management logs (This will collect the Performance Manager logs from NFS)
vertica : Vertica DB analyzer (This will run opsb_vertica_health_check.sh, which collects Vertica performance data).
It is recommended to use external Vertica as the store for all production purposes; the Vertica container bundled with the suite should be used only for non-production deployments such as POCs or test beds. This option needs the vsql client to connect to the Vertica DB. If vsql is not found, the tool will prompt for its location; if vsql is not present or its location is not known, press 'enter' to proceed. To run the check on an external Vertica server, copy opsb_vertica_health_check.sh to the Vertica DB server and run it there.
all : For log files of all capabilities (This option will run and collect logs from all the above capabilities).

Examples:
./opsb_loggrabber.sh -l obm
./opsb_loggrabber.sh -l bvd
./opsb_loggrabber.sh -l all

Output: The tool creates an archive containing all the log files collected from the different pods and the master and worker nodes. The output file is "opsb_suite_support_data.<date>.tar.gz".

Parameters:
cdf: When the "cdf" or "all" option is selected, log files and pod details related to the ITOM platform are archived in the output files. This requires the username/password for the management portal. The tool also prompts for a password with which the archive file is encrypted; this password must be remembered to extract the files. To extract the .aes file created as part of this option, use the command:
"dd if=<aes_filename> | openssl aes-256-ecb -d -k <password> | tar zxf -"
Replace <aes_filename> and <password> with the appropriate values.
vertica: Collects statistics from the Vertica DB. This option can be used from the OpsBridge master node or an external Vertica DB server.
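As a worked instance of the extraction command shown under the "cdf" parameter above (the archive name and password here are hypothetical):
#dd if=opsb_suite_support_data.2019-09-24.tar.gz.aes | openssl aes-256-ecb -d -k 'MySecretPass' | tar zxf -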

2. Vertica health check tool:

Description: This tool can be used to evaluate Vertica DB health. It can be run independently on a Vertica DB server or on a server with the vsql client installed. If the tool is run on an OpsBridge master node, it will identify whether Vertica DB is running as the containerized version and check the health of the Vertica DB inside the pod. (Note: Running the ITOM Vertica DB inside a pod is not recommended for production setups.) To connect to an external Vertica DB, the tool will prompt for the Vertica server and user details. If vsql is not present or its location is not known, press 'enter' to proceed. To run the check on an external Vertica server, copy opsb_vertica_health_check.sh to the Vertica DB server and run it there.

Usage:
#cd loggrabber
#./opsb_vertica_health_check.sh
This will prompt for the Vertica server and user details needed to connect to the Vertica DB.

Output: Creates the result files /tmp/VerticaCOSOInfo*.html, /tmp/COSOData*.csv, and /tmp/VerticaHealth*.csv. Sample output is included in the sample_output folder.
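For example, to run the health check against an external Vertica node (the hostname and user are hypothetical):
#scp loggrabber/opsb_vertica_health_check.sh dbadmin@vertica-host:/tmp/
#ssh dbadmin@vertica-host
#cd /tmp
#chmod +x opsb_vertica_health_check.sh
#./opsb_vertica_health_check.sh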

3. Requirement check tool

Description: This tool verifies whether the system meets the requirements for the given system type. The current version checks requirements such as:
For OpsBridge:
* Firewall must be disabled
* Chrony service must be enabled
* Required ports are available
* Kernel parameters
* Disk speed
* Swap settings
* Required rpm packages are installed
For Vertica:
* Firewall must be disabled
* Chrony service must be enabled
* Disk speed and other I/O settings
* Required rpm packages are installed
* Swap settings

Usage:
#cd opsb_reqcheck
#opsb_reqcheck.sh -type OPSB

opsb_reqcheck.sh -type [OPSB|VERTICA] [-cfg <config>]
opsb_reqcheck.sh -ver
opsb_reqcheck.sh [-? | -help]

Output: The results are printed on the console, with PASS/FAIL status for each requirement. Sample output is included in the sample_output folder.
Note: If additional ports or rpm packages need to be added or edited, add them in the corresponding section of the opsb_reqcheck_conf.xml file.
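For example, after adding extra ports or rpm packages to a copy of the configuration file, the check can be run against that copy (the copied file name is illustrative):
#cd opsb_reqcheck
#cp opsb_reqcheck_conf.xml my_reqcheck_conf.xml (edit the corresponding section in the copy)
#./opsb_reqcheck.sh -type OPSB -cfg my_reqcheck_conf.xml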


4. Resource Check tool

Description: This tool was developed by the SMAX team. Knowing pod statistics often helps in understanding bottlenecks. The tool reads pod resource usage and color-codes the details according to the given threshold values. It requires Python 2.x (2.7 or above) to run.

Usage:
#cd resourceinfo
#python check_resources.py (for pods under all namespaces)

Output:
Sample output is included in the sample_output folder.
Results will be stored under the file: resource_report.csv
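The CSV report can be reviewed directly on the node, for example (assuming the report was written to the current directory):
#column -t -s, resource_report.csv | less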

5. OpsBridge Pod Info tool

Description: This tool was developed by the TS team. It gets information about the Kubernetes pods and nodes associated with OpsB. Use this utility to check the status of the OpsBridge pods. It requires Python 2.x (2.7 or above) to run.

Usage:
#cd resourceinfo
#python opsb-podinfo.py (for pods under all OpsBridge namespaces)

Output: Results are printed on the console; sample output is included in the sample_output folder.

6. Postgres requirement check:

Description: This tool can be used to evaluate the requirements for an external Postgres database. The Postgres DB config file is evaluated for valid settings.

Usage:
#cd opsb_reqcheck
#opsb_extpginfo.sh
This will prompt for the Postgres DB names used for OBM, as well as the user details to access the DB server and its configuration.

Output: Results are printed on the console; sample output is included in the sample_output folder.

7. OpsBridge Node Health tool

Description: This tool was developed by the TS team. It shows basic health information for the nodes in a cluster.

Usage:
#cd resourceinfo
#python opsb-nodehealth.py (for all the nodes)

Output: Results are printed on the console; sample output is included in the sample_output folder.

8. Simplified Containerized Commands

Description: It can be cumbersome to memorize, type, and run long kubectl commands. The "k8s" alias file introduces aliases for quicker typing of commands. Please refer to k8s_alias_usage.txt for sample output.

The alias/k8s file in the folder can be used to load aliases into the shell to help with supportability of containerized environments. For example, instead of running the following command to list all pods:
kubectl get pods -n opsbridge<-hash> -o wide
the source file allows you to simply run:
opsb-getpods
To use this functionality, copy the file (e.g. k8s) to the master/worker node and load it with the source command.

Usage:
#cd alias
#source k8s
Once the file is loaded, the aliases within the k8s file can be used:

OpsBridge
opsb-getns = Get opsbridge namespace
opsb-watch = Watch opsbridge pods
opsb-exec0 = Exec bash shell into omi-0 pod
opsb-exec1 = Exec bash shell into omi-1 pod
opsb-logs0 = Follow omi-0 logs
opsb-getpods = Get opsbridge pods
opsb-scale-down = Scale omi deployment down
opsb-scale-up = Scale single omi deployment up
opsb-scale-upha = Scale HA omi deployment up
opsb-down = Set runlevel to DOWN for the opsbridge namespace. Stops all pods in namespace.
opsb-up = Set runlevel to UP for the opsbridge namespace. Starts all pods in namespace.
opsb-oomkilled = Shows which pods have reached Out Of Memory
opsb-containers = List all containers for all pods in the 'opsbridge' namespace
opsb-shell = Provide a list of pods to choose and then containers within the chosen pod to exec into. Will default to bash or accepts parameter of either /bin/sh or /bin/bash
Usage: opsb-shell [/bin/sh | /bin/bash]
Examples:
opsb-shell /bin/sh
opsb-shell /bin/bash

opsb-execp = Bash into a pod listing the containers so it is possible to choose which container.
Usage: opsb-execp <pod-name>
Examples:
opsb-execp itom-di-vertica-dpl-7cf5578984-6nqfv
opsb-execp omi-0

opsb-exec = Bash into a pod.
Usage: opsb-exec <pod-name> <container-name>
The complete <pod-name> does not need to be provided, only enough to uniquely find it.
Examples:
opsb-exec ucmdb ucmdb
opsb-exec omi-0 omi
opsb-exec itom-di-vertica itom-di-vertica-cnt

opsb-ls-cont = List containers for a specific pod (not including install containers)
Usage: opsb-ls-cont <pod_name>
Examples:
opsb-ls-cont omi-0
opsb-ls-cont bvd-controller-deployment-7fc7c7969-llbfs

opsb-logs = Displays logs by providing a list of pods and then containers so it is possible to choose which logs to show more easily.

opsb-logs2 = Show default logs for a specific pod.
Usage: opsb-logs2 <pod_name>
Examples:
opsb-logs2 omi-0
opsb-logs2 bvd-controller-deployment-7fc7c7969-llbfs

CDF
+++
cdf-clusperf = List CPU and Memory utilization of cluster nodes
cdf-desc-nodes = Describe nodes
cdf-version = List CDF version
cdf-watch = Watch core pods
cdf-watch-all = Watch all pods
cdf-getpods = Get all pods
cdf-exec-idm = Exec bash shell into idm pod
cdf-problem-pods = List pods which are not running/completed
cdf-containers = List all containers for all pods in the 'core' namespace
Examples
++++++++
kubectl delete pod <pod_name> -n `opsb-getns`
kubectl logs <pod_name> -c <container_name> -n `opsb-getns`
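The same pattern works with other kubectl subcommands; for instance (an illustrative addition, not taken from the alias file):
kubectl describe pod <pod_name> -n `opsb-getns`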

To see what command will be executed behind an alias, examine the k8s file or type 'alias'. More complicated commands can be used to automatically insert the correct pod-hash / namespace-hash name; examples can be seen in the k8s file. Consensus on the format of the alias commands suggests that commands with dashes between the words are preferred over case-sensitive names. Once a list is built, the idea is to have the relevant teams (CDF, OpsB, SMAX, etc.) build their own source files and have them embedded with the product and installed upon the installation of CDF.
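For illustration, alias definitions of this kind could look roughly like the following. This is a hypothetical sketch and not the actual contents of the k8s file:
alias opsb-getns='kubectl get ns -o name | grep opsbridge | cut -d/ -f2'
alias opsb-getpods='kubectl get pods -o wide -n $(kubectl get ns -o name | grep opsbridge | cut -d/ -f2)'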

Languages
English
Release: opsb-suite-checker 1.4
Size: 71.9 KB
Date: May 16, 2019
Product compatibility
Operations Bridge (OPSB) Suite
Version 2018.02 · 2018.05 · 2018.08 · 2018.11
Version 2019.02
Release notes

Please refer to ReadMe.txt.

Languages
English
