3.15.6 Product plugin
A product plugin is a component that works in conjunction with JP1/IM - Manager (Intelligent Integrated Management Base) to manage the configuration, performance data, and JP1 events of integrated agents.
The product plugins are shipped as plug-in functions of JP1/IM - Agent. After you set up the product plugins and the integrated agent, JP1/IM - Manager (Intelligent Integrated Management Base) allows you to do the following:
-
Display the IM management nodes being monitored in the Operating Status area of the integrated operation viewer.
-
View charts of the performance data collected by the integrated agent on the Trends tab of the integrated operation viewer.
-
Retrieve the performance data collected by the integrated agent by using the REST API of JP1/IM - Manager.
-
View the JP1 events issued for alerts by the integrated agent of JP1/IM - Agent as events of IM management nodes in the Events area of the integrated operation viewer.
JP1/IM - Agent provides the following product plugins:
| Product plugin | Function |
|---|---|
| jp1pccs.js | Exporter management function |
| jp1pccs_azure.js | Azure Monitor performance data collection function |
| jp1pccs_kubernetes.js | Container monitoring function through user-defined Prometheus |
- Organization of this subsection
(1) Creating IM management node
JP1/IM - Manager obtains the following configurations managed by JP1/IM - Agent so that they can be displayed as IM management nodes in the integrated operation viewer:
-
Host
-
Prometheus server
-
Alertmanager
-
Exporter
-
Fluentd
-
User-defined Exporter (connects to Prometheus server provided by JP1/IM - Agent)
It also obtains the following configurations that connect directly to JP1/IM - Manager (Intelligent Integrated Management Base):
-
Host
-
User-defined Prometheus server
-
User-defined Alertmanager
-
User-defined Exporter (connected to user-defined Prometheus server)
-
User-defined Fluentd
IM management node-related files for JP1/IM - Agent are created when you execute the jddcreatetree command# of JP1/IM - Manager or the Generate Tree Info function in the WebGUI (integrated operation viewer). After executing the jddcreatetree command#, execute the jddupdatetree command# or the IM management node creation function in the WebGUI to reflect the configuration of JP1/IM - Agent in the tree displayed in the integrated operation viewer.
- #
-
For details about each command, see the appropriate command in Chapter 1 Commands in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
Unlike with Yet another cloudwatch exporter, the Azure Monitor performance data collection function and the container monitoring function through user-defined Prometheus do not require tagging on their monitoring targets (Azure or Kubernetes).
-
Azure Monitor performance data collection
The resource_uri label obtained from Promitor contains the resource name, so you must relabel it to jp1_pc_nodelabel in the Prometheus configuration file (jpc_prometheus_server.yml) (see the sketch after this list).
In addition, no tagging is required, so all resources are obtained by default if you use Promitor Resource Discovery. If you want to obtain only specific resources, specify in the Promitor configuration file which resources to obtain.
For details on how to change monitoring targets, see 1.21.2(8) Set up of Promitor (d) Configuring scraping targets (required) in the JP1/Integrated Management 3 - Manager Configuration Guide.
-
Container monitoring through user-defined Prometheus
Particular labels of the metrics contain the resource names, so you must relabel them to jp1_pc_nodelabel in the Prometheus configuration file (jpc_prometheus_server.yml) (see the sketch after this list).
In addition, no tagging is required, so all resources are obtained by default. If you want to obtain only specific resources, specify in the Prometheus configuration file (jpc_prometheus_server.yml) which resources to obtain.
For details on how to change monitoring targets, see 1.21.2(11) Setting up container monitoring (e) Configuring scraping targets (optional) in the JP1/Integrated Management 3 - Manager Configuration Guide.
-
VMware performance data collection function
Nodes are created from vmware_exporter's own metrics, because these metrics contain the VMware ESXi host name and the VM name.
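For the Azure Monitor and container monitoring cases above, the relabeling to jp1_pc_nodelabel could be written, for example, as metric_relabel_configs entries in jpc_prometheus_server.yml. This is a minimal sketch only: the job names, the source label for the container case, and the regular expressions are assumptions, not values defined by this manual.

```yaml
scrape_configs:
  - job_name: jpc_promitor                # assumed job name for the Promitor scrape target
    metric_relabel_configs:
      - source_labels: [resource_uri]     # Promitor label whose value ends with the resource name
        regex: '.*/([^/]+)$'              # keep only the trailing resource name
        target_label: jp1_pc_nodelabel
        replacement: '$1'
  - job_name: user_defined_prometheus     # assumed job name for container monitoring metrics
    metric_relabel_configs:
      - source_labels: [pod]              # assumed metric label that carries the resource name
        regex: '(.+)'
        target_label: jp1_pc_nodelabel
        replacement: '$1'
```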
For details about IM management node-related files, see 3.5.3 IM management node-related files.
If JP1/IM - Manager is in a hierarchical configuration and the IM management node creation function is performed on the integrated manager, the configurations of lower managers are also retrieved.
(a) Type of SID of the target configuration and its available functions
The format of the configuration information SID that corresponds to an IM management node created by JP1/IM - Agent depends on the type of add-on program used for monitoring.
The SID types of the configuration information and the functions that can be used are shown below for each type of Exporter.
| Category | Configuration information SID type | Description | Available functions |
|---|---|---|---|
| Host | Integrated agent host SID | SID of the host where the JP1/IM agent control base is set up. | None (appears in the tree hierarchy) |
| Host | Prometheus host SID | SID of the host on which the Prometheus server is set up. | None (appears in the tree hierarchy) |
| Host | Agent host SID | SID of the monitored host. For an Exporter that monitors its own host, this is the SID of the host on which the Exporter is set up. For an Exporter that performs remote monitoring, this is the SID of the host being monitored. | None (appears in the tree hierarchy) |
| Host | Remote monitoring host SID | SID of the host on which the Exporter that performs remote monitoring is set up. | None (appears in the tree hierarchy) |
| Host | EC2 host SID | SID of a host that exists as an instance on EC2 in AWS. | None (appears in the tree hierarchy) |
| Host | Log Monitor Host SID | SID of the host where Fluentd is set up. | None (appears in the tree hierarchy) |
| Host | Azure VM host SID | SID of a host that exists as an Azure VM. | None (appears in the tree hierarchy) |
| Management Applications Categories | JP1/IM agent control base SID | SID of the JP1/IM agent control base. | Display properties of related nodes |
| Management Applications Categories | PrometheusSID | SID of the Prometheus server. If the health status of the Prometheus server is monitored in the life-and-death monitoring settings for JP1/IM - Agent processes, JP1 events issued when the Prometheus server is stopped are registered to this IM management node. | Display properties of related nodes |
| Management Applications Categories | AlertmanagerSID | SID of the Alertmanager. If the health status of Alertmanager is monitored in the life-and-death monitoring settings for JP1/IM - Agent processes, JP1 events issued when Alertmanager is stopped are registered to this IM management node. | Display properties of related nodes |
| Management Applications Categories | Agent Service SID | SID for operation management of the Exporter that monitors its own host. JP1 events issued when that Exporter is stopped are registered to this IM management node. | Display properties of related nodes |
| Management Applications Categories | Remote Monitoring Service SID | SID for operation management of the Exporter that performs remote monitoring. JP1 events issued when that Exporter is stopped are registered to this IM management node. | Display properties of related nodes |
| Management Applications Categories | Log Monitor Service SID | SID for operation management of Fluentd. JP1 events issued when Fluentd is stopped are registered to this IM management node. | Display properties of related nodes |
| Integrated agent Categories | Agent SID | SID of the monitoring target. JP1 events for alerts that monitor performance data are registered to this IM management node. If the operating status of the host is monitored in the life-and-death monitoring settings for the host, JP1 events issued when the host is stopped are also registered to this IM management node. | Display trend information, display properties of related nodes |
| Integrated agent Categories | Remote Agent SID | SID of the remotely monitored target. JP1 events for alerts that monitor performance data are registered to this IM management node. If the operating status of the host is monitored in the life-and-death monitoring settings for the host, JP1 events issued when the host is stopped are also registered to this IM management node. | Display trend information, display properties of related nodes |
| Integrated agent Categories | SID of logging targets | SID of the application or OS whose logs are monitored. JP1 events converted from information output to the log files of the monitoring target or to the Windows event log are registered to this IM management node. | Display trend information, display properties of related nodes |
| Platform Categories | CloudWatchSID for EC2 | SID of a monitoring target on EC2. When monitoring with Yet another cloudwatch exporter, if an instance exists in EC2 in AWS and EC2 metrics are described in the metric definition file, this IM management node is created for each metric of each instance. JP1 events corresponding to alerts that monitor EC2 instances are registered to this IM management node. Note that IM management nodes are not created per image (AMI) ID, per instance type, or for the entire set of instances. | Display trend information, display properties of related nodes |
| Platform Categories | AzureMonitorSID of an Azure VM | SID of an Azure VM as a monitoring target. When Promitor is used for monitoring, this IM management node is created if an Azure VM instance exists as a monitoring target and VM metrics are defined in the metric definition file. JP1 events corresponding to alerts that monitor VM instances are registered to this IM management node. | Display trend information, display properties of related nodes |
| Object Root Node | CloudWatchSID other than EC2 | SID of a monitoring target other than EC2. When monitoring with Yet another cloudwatch exporter, if a resource exists in an AWS service other than EC2 and the metrics to monitor that resource are described in the metric definition file, this IM management node is created for each metric of each instance. JP1 events corresponding to alerts that monitor services other than EC2 are registered to this IM management node. | Display trend information, display properties of related nodes |
| Object Root Node | AzureMonitorSID of a non-Azure VM monitoring target | SID of a monitoring target that is not an Azure VM. When Promitor is used for monitoring, this IM management node is created if an instance other than an Azure VM exists as a monitoring target and the metrics to monitor the resource are defined in the metric definition file. JP1 events corresponding to alerts that monitor resource instances are registered to this IM management node. | Display trend information, display properties of related nodes |
| Object Root Node | Kubernetes SID | SID of a Kubernetes monitoring target. When working together with Prometheus, this IM management node is created if resources exist as monitoring targets and the metrics to monitor the resources are defined in the metric definition file. | Display trend information, display properties of related nodes |
(b) SID Creation Condition for JP1/IM agent base
The configuration of the JP1/IM agent base is obtained from the Intelligent Integrated Management Base, and the following SIDs are created:
-
Integrated agent host SID
-
JP1/IM agent control base SID
If you monitor a target by using user-defined Prometheus or Fluentd and the integrated agent is not installed on that host, the JP1/IM agent control base SID for that host is not created.
If JP1/IM - Manager is in a hierarchical configuration and the IM management node creation function is performed on the integrated manager, the configurations managed by lower managers are also retrieved. Therefore, the Intelligent Integrated Management Base of each lower manager must be accessible. For details about the prerequisites for JP1/IM - Manager in a hierarchical configuration, see 3.5.8 Monitoring of multiple sites in Intelligent Integrated Management Base.
(c) Creation condition of IM management node other than JP1/IM agent base
IM management nodes other than those of the JP1/IM agent base are created from the trend data stored in the trend data management database of JP1/IM - Manager. Therefore, start the add-on program services and wait about one minute# before creating the IM management nodes.
- #
-
If the value of scrape_interval in the Prometheus configuration file (jpc_prometheus_server.yml) has been changed, wait for that length of time before the operation.
When the jddcreatetree command is executed, IM management nodes are not created for add-on programs, user-defined Prometheus, or Fluentd whose metrics have not been saved in the trend data management database.
If either of the following is true, metrics are not saved in the trend data management database:
-
The add-on program, user-defined Prometheus, or Fluentd service has been stopped or deleted, and the retention period of the trend data management database (default: 32 days) has passed.
-
The add-on program, user-defined Prometheus, or Fluentd service has been stopped or deleted, and the metrics have been deleted because the trend data management database reached its upper limit#
- #
-
For details about the deletion of metrics from the trend data management database, see 2.7.2(3) Deleting trend data.
Tagging is required on AWS when you use Yet another cloudwatch exporter. For details, see 3.15.6(1)(k) Creating an IM Management Node for Yet another cloudwatch exporter.
Even when user-defined Prometheus or Fluentd is used, IM management nodes are created if trend data is stored in the trend data management database.
If JP1/IM - Manager is in a hierarchical configuration and the IM management node creation function is performed on the integrated manager, the IM management nodes under a lower manager are created from the trend data stored in the trend data management database of that lower manager. Therefore, the Intelligent Integrated Management Base of the lower manager must be accessible, and the metrics must be stored in the trend data management database of the lower manager.
-
For details about JP1/IM - Manager prerequisites for hierarchical structure, see (1) Prerequisites in 3.5.8 Monitoring of multiple sites.
-
For details about the various data managed by JP1/IM - Manager, see 3.6.1 Introduction in 3.6 Operation using WebGUI (integrated operation viewer).
(d) Creating and Operating IM management node for stopped Service
IM management nodes are created even if the integrated agent host, an add-on program, or a user-defined Prometheus or Fluentd service is temporarily stopped for maintenance or other reasons. However, if the retention period of the trend data management database expires or if metrics are deleted because of the upper limit on its size, the IM management nodes are no longer created. If an IM management node was unintentionally deleted, it is created again by creating and importing the IM management node tree while the service is running and the metrics are saved in the trend data management database.
(e) Creating and Operating IM management node for Configuration Changes or Deletions
If you change or delete the configuration of an add-on program, user-defined Prometheus, or Fluentd, the IM management nodes from before the configuration change or deletion continue to be created unless their metrics have been removed from the trend data management database. For details about how to prevent IM management nodes from before a configuration change or deletion from being created, see 1.21.2(19) Creation and import of IM management node tree data (for Windows) (required) and 2.19.2(18) Creating and importing IM management node tree data (for Linux) (required) in the JP1/Integrated Management 3 - Manager Configuration Guide.
(f) Property display
The properties of the IM management nodes of JP1/IM - Agent are displayed in the node details area of the integrated operation viewer of JP1/IM - Manager.
The values that JP1/IM - Agent sets as properties of IM management nodes are shown below. Note that properties are not set for SIDs that are not listed here.
| Category | Configuration information SID type | Prometheus hostname#1 (member name: jp1_pc_prome_hostname) | Scrape Job Name#1 (member name: job) | Exporter name#1 (member name: jp1_pc_exporter) | Prometheus trend name#1 (member name: jp1_pc_trendname) | Module#1 #2 (member name: module) |
|---|---|---|---|---|---|---|
| Host | EC2 host SID | -- | -- | -- | -- | -- |
| Host | Azure VM host SID | -- | -- | -- | -- | -- |
| Management Applications Categories | PrometheusSID | "Prometheus hostname" | -- | -- | -- | -- |
| Management Applications Categories | AlertmanagerSID | "Prometheus hostname" | -- | -- | -- | -- |
| Management Applications Categories | Agent Service SID | "Prometheus hostname" | "Scrape Job Name" | "jp1_pc_exporter value" | -- | -- |
| Management Applications Categories | Remote Monitoring Service SID | "Prometheus hostname" | "Scrape Job Name" | -- | -- | -- |
| Management Applications Categories | JP1/IM agent control base SID | -- | -- | -- | -- | -- |
| Integrated agent Categories | Agent SID | "Prometheus hostname" | "Scrape Job Name" | "jp1_pc_exporter value" | "jp1_pc_trendname value" | -- |
| Integrated agent Categories | Remote Agent SID | "Prometheus hostname" | "Scrape Job Name" | "jp1_pc_exporter value" | "jp1_pc_trendname value" | "jp1_pc_module value" |
| Integrated agent Categories | SID of logging targets | "Prometheus hostname" | -- | -- | -- | -- |
| Platform Categories | CloudWatchSID for EC2 | "Prometheus hostname" | "Scrape Job Name" | "jp1_pc_exporter value" | -- | -- |
| Platform Categories | AzureMonitorSID of an Azure VM | "Prometheus hostname" | "Scrape Job Name" | "jp1_pc_exporter value" | -- | -- |
| Object Root Node | CloudWatchSID other than EC2 | "Prometheus hostname" | "Scrape Job Name" | "jp1_pc_exporter value" | -- | -- |
| Object Root Node | AzureMonitorSID of a non-Azure VM monitoring target | "Prometheus hostname" | "Scrape Job Name" | "jp1_pc_exporter value" | -- | -- |
| Object Root Node | Kubernetes SID | "Prometheus hostname" | "Scrape Job Name" | "jp1_pc_exporter value" | -- | "jp1_pc_module value" |
| Category | Configuration information SID type | Account#1 (member name: account) | Region#1 (member name: region) | AWS namespace#1 (member name: cloud_srv) | Connection-destination integrated manager hostname#1 (member name: manager_hostname) | Add-on program name#1 (member name: jp1_pc_addon_program) |
|---|---|---|---|---|---|---|
| Host | EC2 host SID | "Account String" | "Region name" | -- | -- | -- |
| Host | Azure VM host SID | -- | -- | -- | -- | -- |
| Management Applications Categories | PrometheusSID | -- | -- | -- | -- | -- |
| Management Applications Categories | AlertmanagerSID | -- | -- | -- | -- | -- |
| Management Applications Categories | Agent Service SID | -- | -- | -- | -- | -- |
| Management Applications Categories | Remote Monitoring Service SID | -- | -- | -- | -- | -- |
| Management Applications Categories | JP1/IM agent control base SID | -- | -- | -- | "Connection-destination integrated manager hostname" | -- |
| Integrated agent Categories | Agent SID | -- | -- | -- | -- | -- |
| Integrated agent Categories | Remote Agent SID | -- | -- | -- | -- | -- |
| Integrated agent Categories | SID of logging targets | -- | -- | -- | -- | "jp1_pc_addon_program value" |
| Platform Categories | CloudWatchSID for EC2 | "Account String" | "Region name" | "AWS namespace name" | -- | -- |
| Platform Categories | AzureMonitorSID of an Azure VM | -- | -- | -- | -- | -- |
| Object Root Node | CloudWatchSID other than EC2 | "Account String" | "Region name" | "AWS namespace name" | -- | -- |
| Object Root Node | AzureMonitorSID of a non-Azure VM monitoring target | -- | -- | -- | -- | -- |
| Object Root Node | Kubernetes SID | -- | -- | -- | -- | -- |
| Category | Configuration information SID type | Tenant#1 (member name: tenant) | Subscription#1 (member name: subscription) | Resource group#1 (member name: resource_group) | Azure service#1 (member name: cloud_srv) |
|---|---|---|---|---|---|
| Host | EC2 host SID | -- | -- | -- | -- |
| Host | Azure VM host SID | "tenant-string" | "subscription-string" | "resource-group-name" | -- |
| Management Applications Categories | PrometheusSID | -- | -- | -- | -- |
| Management Applications Categories | AlertmanagerSID | -- | -- | -- | -- |
| Management Applications Categories | Agent Service SID | -- | -- | -- | -- |
| Management Applications Categories | Remote Monitoring Service SID | -- | -- | -- | -- |
| Management Applications Categories | JP1/IM agent control base SID | -- | -- | -- | -- |
| Integrated agent Categories | Agent SID | -- | -- | -- | -- |
| Integrated agent Categories | Remote Agent SID | -- | -- | -- | -- |
| Integrated agent Categories | SID of logging targets | -- | -- | -- | -- |
| Platform Categories | CloudWatchSID for EC2 | -- | -- | -- | -- |
| Platform Categories | AzureMonitorSID of an Azure VM | "tenant-string" | "subscription-string" | "resource-group-name" | "Azure-service-name" |
| Object Root Node | CloudWatchSID other than EC2 | -- | -- | -- | -- |
| Object Root Node | AzureMonitorSID of a non-Azure VM monitoring target | "tenant-string" | "subscription-string" | "resource-group-name" | "Azure-service-name" |
| Object Root Node | Kubernetes SID | -- | -- | -- | -- |
- (Legend)
-
--: Not applicable
- #1
-
If the value of each property exceeds 255 characters, the 256th and subsequent characters are truncated.
- #2
-
Displayed only for IM management nodes of the Blackbox exporter included in JP1/IM - Agent.
- Prometheus hostname
-
For the PrometheusSID or AlertmanagerSID, the host name of the host where the Prometheus server is set up is set. For other SIDs, the host name of the Prometheus server that scrapes the Exporter corresponding to the IM management node is set.
- Scrape Job Name
-
The value set for job_name in scrape_configs of the Prometheus configuration file (jpc_prometheus_server.yml) is entered (see the sketch after this list).
- jp1_pc_exporter value
-
In the scrape_configs of the Prometheus configuration file (jpc_prometheus_server.yml), the value set to jp1_pc_exporter is entered. If no value is set, Unknown Exporter is entered.
- jp1_pc_module value
-
The value set in the params module in scrape_configs of the Prometheus configuration file (jpc_prometheus_server.yml) is entered.
- Account String
-
Contains the value of the AWS account string corresponding to the account ID to be monitored specified in the AWS definition file (aws_settings.conf).
- Region name
-
Contains the value of the region label included in the Yet another cloudwatch exporter metric.
- AWS namespace name
-
Contains the value set in the cloud_srv member of the trend data metric corresponding to SID set in the metric definition file (metrics_ya_cloudwatch_exporter.conf) of Yet another cloudwatch exporter.
- jp1_pc_addon_program
-
The value set in jp1_pc_addon_program in Fluentd monitor definition file is entered. If no value is set, "Fluentd" is entered.
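The following minimal sketch illustrates where these members could appear in a scrape job of jpc_prometheus_server.yml. The job name, target, and label values are illustrative assumptions, not shipped defaults; only the member names (job_name, params module, jp1_pc_exporter, jp1_pc_trendname) come from the descriptions above.

```yaml
scrape_configs:
  - job_name: jpc_blackbox_http            # entered as the "Scrape Job Name" property (job)
    params:
      module: [http]                       # entered as the "Module" property (module)
    static_configs:
      - targets: ['monitored-host:20715']  # assumed scrape target
        labels:
          jp1_pc_exporter: Blackbox exporter    # entered as the "Exporter name" property
          jp1_pc_trendname: blackbox_exporter   # entered as the "Prometheus trend name" property
```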
(g) Configuration information SID format
The format of the SID of the configuration information corresponding to the IM management node created by the JP1/IM - Agent is shown below.
If the URL-encoded string exceeds 255 characters, the structure ID of the configuration information SID is split: the first 255 characters are set as the name of the structure ID, and the remaining characters are set as the name of a lower-level structure ID.
| Category | Configuration information SID type | SID format |
|---|---|---|
| Host | Integrated agent host SID | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-IMA_JP1/IM agent control base hostname/_HOST_JP1/IM agent control base hostname |
| Host | Prometheus host SID | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-M_P hostname/_HOST_P hostname |
| Host | Agent host SID | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-M_P hostname/_JP1PC-AHOST_E hostname/_HOST_I hostname |
| Host | Remote monitoring host SID | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-M_P hostname/_JP1PC-AHOST_E hostname/_HOST_E hostname |
| Host | EC2 host SID | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-M_P hostname/_JP1PC-AHOST_E hostname/_HOST_jp1_pc_nodelabel |
| Host | Log Monitor Host SID | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-LOGTRAP_I hostname/_HOST_I hostname |
| Host | Azure VM host SID | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-M_P hostname/_JP1PC-AHOST_E hostname/_HOST_jp1_pc_nodelabel |
| Management Applications Categories | JP1/IM agent control base SID | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-IMA_JP1/IM agent control base hostname/_HOST_JP1/IM agent control base hostname/_IMAGENT_ |
| Management Applications Categories | PrometheusSID | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-M_P hostname/_HOST_P hostname/_PROMETHEUS_ |
| Management Applications Categories | AlertmanagerSID | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-M_P hostname/_HOST_P hostname/_ALERTMANAGER_ |
| Management Applications Categories | Agent Service SID | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-M_P hostname/_JP1PC-AHOST_I hostname/_HOST_I hostname/_JP1PC-A_jp1_pc_nodelabel/_JP1PC-SERVICE_ |
| Management Applications Categories | Remote Monitoring Service SID | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-M_P hostname/_JP1PC-AHOST_E hostname/_HOST_E hostname/_JP1PC-RM_jp1_pc_remote_monitor_instance/_JP1PC-SERVICE_ |
| Management Applications Categories | Log Monitor Service SID | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-LOGTRAP_I hostname/_HOST_I hostname/_JP1PC-A_jp1_pc_nodelabel_fluentd/_JP1PC-SERVICE_ |
| Integrated agent Categories | Agent SID | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-M_P hostname/_JP1PC-AHOST_I hostname/_HOST_I hostname/_JP1PC-A_jp1_pc_nodelabel |
| Integrated agent Categories | Remote Agent SID | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-M_P hostname/_JP1PC-AHOST_E hostname/_JP1PC-RM_jp1_pc_remote_monitor_instance/_HOST_I hostname/_JP1PC-A_jp1_pc_nodelabel |
| Integrated agent Categories | SID of logging targets | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-LOGTRAP_I hostname/_HOST_I hostname/_JP1PC-A_jp1_pc_nodelabel |
| Platform Categories | CloudWatchSID for EC2 | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-M_P hostname/_JP1PC-AHOST_E hostname/_JP1PC-RM_jp1_pc_remote_monitor_instance/_HOST_jp1_pc_nodelabel/_JP1PC-A_Yet%20another%20cloudwatch%20exporter |
| Platform Categories | AzureMonitorSID of an Azure VM | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-M_P hostname/_JP1PC-AHOST_E hostname/_JP1PC-RM_jp1_pc_remote_monitor_instance/_HOST_jp1_pc_nodelabel/_JP1PC-A_Promitor |
| Object Root Node | CloudWatchSID other than EC2 | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-M_P hostname/_JP1PC-AHOST_E hostname/_JP1PC-RM_jp1_pc_remote_monitor_instance/_JP1PC-AWS namespace name_jp1_pc_nodelabel |
| Object Root Node | AzureMonitorSID of a non-Azure VM monitoring target | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-M_P hostname/_JP1PC-AHOST_E hostname/_JP1PC-RM_jp1_pc_remote_monitor_instance/_JP1PC-Azure service name_jp1_pc_nodelabel |
| Object Root Node | Kubernetes SID | _JP1PC-IMB_JP1/IM agent management base hostname/_JP1PC-M_P hostname/_JP1PC-component name_jp1_pc_nodelabel |
- P host name
-
A URL-encoded string of the host name of the host where the Prometheus server is set up.
- E host name
-
A URL-encoded string of the host name of the host where the exporter is set up.
- I host name
-
This is a URL-encoded string of the host name of the host to be monitored.
When an IP address is specified for targets in the discovery configuration file (file_sd_config_blackbox_icmp.yml) of Blackbox exporter (ICMP monitoring), the I-host name of the remote monitoring host SID and the remote agent SID are IP addresses.
For Fluentd, this is the value specified in "instance" of the monitor definition file.
- jp1_pc_nodelabel
-
This is a URL encoded string of the value set in the jp1_pc_nodelabel in the scrape_configs of the Prometheus configuration file (jpc_prometheus_server.yml).
In the case of EC2 host SID, CloudWatchSID of EC2, CloudWatchSID other than EC2, this is a URL encoded string of the value set in the jp1_pc_nodelabel of the AWS resource.
For Fluentd, this is URL encoded value set in "jp1_pc_nodelabel" of the monitor definition file.
- jp1_pc_remote_monitor_instance
-
This is a URL-encoded string of the portion after the ":" (colon) in the value set in jp1_pc_remote_monitor_instance in the discovery configuration file. If the value of jp1_pc_remote_monitor_instance is changed in relabel_configs, this is a URL-encoded string of the changed value.
- jp1_pc_nodelabel_fluentd
-
This is a URL encoded value set in "jp1_pc_nodelabel_fluentd" of Fluentd monitor definition file.
- AWS name space name
-
This is the value specified for the cloud_srv member in the Yet another cloudwatch exporter metric definition file (metrics_ya_cloudwatch_exporter.conf), with forward slashes (/) changed to hyphens (-) (for example, AWS/EC2 becomes AWS-EC2).
- Azure service name
-
A value specified for the cloud_srv member in the metric definition file for monitoring Azure.
- component name
-
A value specified for module member in the metric definition file for container monitoring.
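As an illustration only (all names are hypothetical), for a Node exporter running on an integrated agent host agent01 that is scraped by the Prometheus server on the same host, under an integrated manager host mgr01 that has the JP1/IM agent management base, and with jp1_pc_nodelabel set to agent01, the agent SID takes the form _JP1PC-IMB_mgr01/_JP1PC-M_agent01/_JP1PC-AHOST_agent01/_HOST_agent01/_JP1PC-A_agent01.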
The following table shows the meanings of the character strings in the configuration information SIDs. (The leading "JP1PC" is a fixed character string in JP1/IM - Agent.)
| SID string | Meaning |
|---|---|
| _JP1PC-IMB_ | Hosts with JP1/IM agent management base functions |
| _JP1PC-IMA_ | Hosts with JP1/IM agent control base functions |
| _JP1PC-M_ | Hosts with manager capabilities |
| _JP1PC-LOGTRAP_ | Hosts with log monitoring capabilities |
| _JP1PC-AHOST_ | Hosts with agent functions |
| _JP1PC-A_ | Agent |
| _JP1PC-RM_ | Remote agent |
(h) IM management node per integrated agent
The IM management nodes that are created depend on the types of Exporters that the integrated agent is set up with. The following tables show whether an IM management node can be created for each JP1/IM agent base or add-on program that is set up.
| Category | Type of configuration SID | JP1/IM agent control base | Windows exporter, Windows exporter (Hyper-V monitoring), Node exporter, Node exporter for AIX | Blackbox exporter, Web exporter, VMware exporter, SQL exporter | Yet another cloudwatch exporter | User-defined Exporter | Fluentd |
|---|---|---|---|---|---|---|---|
| Host | Integrated agent host SID | Y | -- | -- | -- | -- | -- |
| Host | Prometheus host SID | -- | Y | Y | Y | Y | -- |
| Host | Agent host SID | -- | Y | Y | -- | Y#2 | -- |
| Host | Remote monitoring host SID | -- | -- | Y | Y | Y#2 | -- |
| Host | EC2 host SID | -- | -- | -- | Y | -- | -- |
| Host | Log Monitor Host SID | -- | -- | -- | -- | -- | Y |
| Host | Azure VM host SID | -- | -- | -- | -- | -- | -- |
| Management Applications Categories | JP1/IM agent control base SID | Y | -- | -- | -- | -- | -- |
| Management Applications Categories | PrometheusSID | -- | Y | Y | Y | Y | -- |
| Management Applications Categories | AlertmanagerSID | -- | Y | Y | Y | Y | -- |
| Management Applications Categories | Agent Service SID | -- | Y | -- | -- | Y#1 | -- |
| Management Applications Categories | Remote Monitoring Service SID | -- | -- | Y | Y | Y#1 | -- |
| Management Applications Categories | Log Monitor Service SID | -- | -- | -- | -- | -- | Y |
| Integrated agent Categories | Agent SID | -- | Y | -- | -- | Y#2 | -- |
| Integrated agent Categories | Remote Agent SID | -- | -- | Y | -- | Y#2 | -- |
| Integrated agent Categories | SID of logging targets | -- | -- | -- | -- | -- | Y |
| Platform Categories | CloudWatchSID for EC2 | -- | -- | -- | Y | -- | -- |
| Platform Categories | AzureMonitorSID of an Azure VM | -- | -- | -- | -- | -- | -- |
| Object Root Node | CloudWatchSID other than EC2 | -- | -- | -- | Y | -- | -- |
| Object Root Node | AzureMonitorSID of a non-Azure VM monitoring target | -- | -- | -- | -- | -- | -- |
| Object Root Node | Kubernetes SID | -- | -- | -- | -- | -- | -- |
| Category | Type of configuration SID | User-defined Exporter#1 | User-defined Fluentd#1 | Azure | Process | UAP | Container | Log metrics (compliant with user-defined Exporter) |
|---|---|---|---|---|---|---|---|---|
| Host | Integrated agent host SID | -- | -- | -- | -- | -- | -- | -- |
| Host | Prometheus host SID | Y | -- | Y | Y | Y | Y | Y |
| Host | Agent host SID | Y#2 | -- | -- | Y | Y | Y | Y#2 |
| Host | Remote monitoring host SID | Y#2 | -- | Y | -- | -- | Y | Y#2 |
| Host | EC2 host SID | -- | -- | -- | -- | -- | -- | -- |
| Host | Log Monitor Host SID | -- | Y | -- | -- | -- | -- | -- |
| Host | Azure VM host SID | -- | -- | Y | -- | -- | -- | -- |
| Management Applications Categories | JP1/IM agent control base SID | -- | -- | -- | -- | -- | -- | -- |
| Management Applications Categories | PrometheusSID | Y | -- | Y | Y | Y | Y | Y |
| Management Applications Categories | AlertmanagerSID | Y | -- | Y | Y | Y | Y | Y |
| Management Applications Categories | Agent Service SID | Y#1 | -- | Y | Y | Y | Y | Y#2 |
| Management Applications Categories | Remote Monitoring Service SID | Y#1 | -- | -- | -- | -- | Y | Y#2 |
| Management Applications Categories | Log Monitor Service SID | -- | Y | -- | -- | -- | -- | -- |
| Integrated agent Categories | Agent SID | Y#2 | -- | -- | Y | Y | Y | Y#2 |
| Integrated agent Categories | Remote Agent SID | Y#2 | -- | -- | -- | -- | Y | Y#2 |
| Integrated agent Categories | SID of logging targets | -- | Y | -- | -- | -- | -- | -- |
| Platform Categories | CloudWatchSID for EC2 | -- | -- | -- | -- | -- | -- | -- |
| Platform Categories | AzureMonitorSID of an Azure VM | -- | -- | Y | -- | -- | -- | -- |
| Object Root Node | CloudWatchSID other than EC2 | -- | -- | -- | -- | -- | -- | -- |
| Object Root Node | AzureMonitorSID of a non-Azure VM monitoring target | -- | -- | Y | -- | -- | -- | -- |
| Object Root Node | Kubernetes SID | -- | -- | -- | -- | -- | Y | -- |
- (Legend)
-
Y: An IM management node is created
--: An IM management node is not created
- #1
-
This is applicable when connecting directly to JP1/IM - Manager (Intelligent Integrated Management Base).
- #2
-
Depending on the contents specified in the Prometheus configuration file (jpc_prometheus_server.yml), it may or may not be created.
(i) Tree Format
The contents displayed in the JP1/IM - Manager Integrated Operation Viewer tree depend on the type of exporter used for monitoring.
In the JP1/IM - Manager's Integrated Operation Viewer, the IM management nodes created by JP1/IM - Agent are displayed in the tree shown below.
All Systems
+ BizSystem#1
| + Integration Manager host name
| | + Management Applications#2
| | + Metric forwarder(Prometheus server)#3
| | + Alert forwarder(Alertmanager)#4
| | + AWS metric collector(Yet another cloudwatch exporter)#6
| | + Synthetic metric collector(Blackbox exporter)#6
| | + Azure metric collector (Promitor)#6
| | + Synthetic web metric collector(Web exporter)#6
| | + VMware metric collector(VMware exporter)#32
| | + MSSQL metric collector(SQL exporter)#6
| + Monitored host name or container name
| | + Management Applications#2
| | | + JP1/IM agent control base#7
| | | + Metric forwarder(Prometheus server)#3
| | | + Alert forwarder(Alertmanager)#4
| | | + Linux metric collector(Node exporter)#5
| | | + Windows metric collector(Windows exporter)#5
| | | + Hyperv metric collector(Windows exporter hyperv)#5
| | | + Synthetic metric collector(Blackbox exporter)#6
| | | + Linux process metric collector(Process exporter)#5
| | | + Script metric collector(Script exporter)#5
| | | + Synthetic web metric collector(Web exporter)#6
| | | + VMware metric collector(VMware exporter)#6
| | | + jp1_pc_nodelabel value#8
| | | + Log trapper(Fluentd)#9
| | + Platform#2
| | | + Linux metric collector(Node exporter)#10
| | | + Windows metric collector(Windows exporter)#10
| | | + jp1_pc_nodelabel value#10
| | + Service#2
| | | + Service name #24
| | | + Unit filename #25
| | + Enterprise#2
| | | + SAP Syslog extractor(jr3lget)#26
| | | + SAP CCMS Alert extractor(jr3alget)#26
| | + Categories specified in the jp1_pc_category#11
| | | + jp1_pc_nodelabel value#12
| | + Categories specified in the jp1_pc_category#2
| | + jp1_pc_nodelabel value#13
| | + Virtual Machine Host#2
| | + Hyperv metric collector(Hypervisor)#5
| + Monitored host name or IP address#14
| | + Platform#2
| | + Synthetic metric collector(Blackbox exporter(ICMP))#15
| + Host name of the host to monitor with HTTP/HTTPS#16
| | + ServiceResponse#2
| | + jp1_pc_nodelabel value#15
| + Hostname of the host to monitor in the Web scenario#16
| | + Service Response#2
| | + jp1_pc_nodelabel value#15
| + Hostname of the Hypervisor#16
| | + Virtual Machine#2
| | + VMware metric collector(VMware exporter)#15
| + Monitored VM Name#16
| | + Virtual Machine#2
| | + VMware metric collector(VMware exporter)#15
| + Virtual Machine Name#16
| | + Virtual Machine#2
| | + Hyperv metric collector(VM) Hypervisor:Hypervisor hostname#33
| + Hostname for SQL Server#16
| | + Database#2
| | + Instance for SQL Server#15
| + Host name of the host on AWS#17
| | + Platform#2
| | + AWS metric collector(Yet another cloudwatch exporter)#18
| + Monitored host name#27
| | + Management Applications#2
| | | + AIX metric collector(Node exporter for AIX)#5
| | + Platform#2
| | + AIX metric collector(Node exporter for AIX)#10
| + Host name of the host on Azure#20
| | + Platform#2
| | + Promitor#21
| + SAP system instance name#28 #29
| + Enterprise#2 #28
| + SAP Syslog#28
| + SAP CCMS Alert#28
+ JP1/IM - Agent host
| + Management Applications
| + OracleDB metric collector(OracleDB exporter)#30
+ Oracle Database host
| + Database
| + Monitored target (any name) #31
| | :
| + Monitored target (any name)
+ Amazon Simple Storage Service
| + Storage Name#19
| ...
+ Kubernetes
| + Clusters
| | + cluster-name#22
| + Namespaces
| | + Namespace-name#22
| + DaemonSets
| | + DaemonSet-name#22
| + StatefulSets
| | + StatefulSet-name#22
| + ReplicaSets
| | + ReplicaSets-name#22
| + CronJobs
| | + CronJob-name#22
| + Pods
| | + Pod-name#22
| + Nodes
| + node-name#22
+ Azure Kubernetes Service
| + cluster-name#23
| ...
+ system-name
+ resource-name
- #1
-
Indicates the system node. Created when the user has defined a system node.
- #2
-
Indicates a category.
- #3
-
Indicates the PrometheusSID.
- #4
-
Indicates the AlertmanagerSID.
- #5
-
Indicates the agent service SID.
- #6
-
Indicates the Remote Monitoring Service SID.
- #7
-
Indicates a JP1/IM agent control base SID.
- #8
-
Indicates the Agent Service SID (user-defined Exporter).
- #9
-
Indicates the log-monitoring-service SID.
- #10
-
Indicates the agent SID.
- #11
-
Indicates the category (user-defined Exporter, Script exporter, or log metrics).
- #12
-
Indicates the agent SID (user-defined Exporter, Script exporter, or log metrics).
- #13
-
Indicates SID of the logging target.
- #14
-
In the targets of the discovery configuration file (file_sd_config_blackbox_icmp.yml) of Blackbox exporter (ICMP monitoring), the host name is displayed when a host name is specified, and the IP address is displayed when an IP address is specified.
- #15
-
Indicates the remote agent SID.
- #16
-
Indicates the agent host SID.
- #17
-
Indicates the EC2 host SID.
The label displays the string set in the AWS jp1_pc_nodelabel tag.
- #18
-
Indicates the CloudWatchSID for EC2.
- #19
-
Indicates a CloudWatch SID other than EC2.
For IM management nodes other than EC2, an object root node with its object root node type as JP1CS-AWS-namespace-name is created.
The label displays the string set in the AWS jp1_pc_nodelabel tag.
- #20
-
Indicates an Azure VM host SID.
The label shows the string specified for jp1_pc_nodelabel of the metric.
- #21
-
Indicates AzureMonitorSID of an Azure VM.
- #22
-
Indicates a Kubernetes SID.
An object root node with its object root node type as JP1CS-component-name is created.
For a component other than a cluster, the label shows the string specified for jp1_pc_nodelabel of the metric.
For a cluster, an IM management node is displayed if jp1_pc_prome_clustername exists in the metric. The label shows the string specified for jp1_pc_prome_clustername.
Even for nodes that display data for two scrape targets: kube-state-metrics and kubelet, the nodes are displayed as one node without being separated.
- #23
-
Indicates AzureMonitorSID of a non-Azure VM monitoring target.
In an IM management node for a non-VM target, an object root node with its object root node type as JP1CS-Azure-service-name is created.
The label shows the string specified for jp1_pc_nodelabel of the metric.
- #24
-
Indicates the agent SID (if you are performing service monitoring in a Windows environment).
Set the label name (jp1_pc_nodelabel value) of the IM management node for service monitoring (Windows environment) to the value displayed in "Service name:" when you open the service properties from the Services screen of the Windows administrative tools. If the service name contains half-width uppercase characters, they are converted to half-width lowercase characters before being set. If it contains full-width uppercase characters, they are converted to full-width lowercase characters before being set. The upper limit of the length of the jp1_pc_nodelabel value is 234 bytes as a URL-encoded string (26 characters if all characters are multibyte). If the limit is exceeded, the value of jp1_pc_nodelabel must be changed in metric_relabel_configs of the Prometheus configuration file (jpc_prometheus_server.yml). For details on the settings, see 1.21.2(3)(g) Configure the settings when the label name (jp1_pc_nodelabel value) of the IM management node exceeds the upper limit (for Windows) (optional) in the JP1/Integrated Management 3 - Manager Configuration Guide.
- #25
-
Indicates the agent SID (if you are performing service monitoring in a Linux environment).
Set the unit file name (file name of the unit file registered in Systemd) to the label name (jp1_pc_nodelabel value) of the IM management node of Service Monitoring (Linux environment). The upper limit of the length of the jp1_pc_nodelabel value is 234 bytes for the URL-encoded string (the upper limit is 26 characters for all multibyte characters). If the limit is exceeded, the value of the jp1_pc_nodelabel must be changed in the metric_relabel_configs of the Prometheus configuration file (jpc_prometheus_server.yml). For details on how to set the settings, see 2.19.2(3)(g) Set when the IM management node label name (jp1_pc_nodelabel value) exceeds the upper limit (for Linux) (optional) in the JP1/Integrated Management 3 - Manager Configuration Guide.
- #26
-
This node is created by the metric output function (Script exporter) of SAP system monitoring.
- #27
-
Indicates the host name of AIX.
- #28
-
This node is created by the metric output function (Fluentd) of SAP system monitoring.
- #29
-
Indicates the instance name of the SAP system to be monitored by the log monitoring function (Fluentd) of SAP system monitoring. Displays the host name set in the "instance" item of the monitoring definition file for text-format log files.
- #30
-
Only one process is displayed even if multiple processes are started.
- #31
-
For CDB configurations, the default tree view displays the root container and PDBs side by side, in no particular order.
- #32
-
Indicates VMware monitoring service SID.
- #33
-
Indicates SID of VM.
For agent SIDs, remote agent SIDs, and logging-target SIDs, a tree SID is created for each monitoring target. When the same target is monitored by different add-on programs, it is displayed as a single tree SID; the tree SIDs are the same ID. For example, if you use Blackbox exporter to monitor the health of a Web server and also use Fluentd to monitor the logs of that Web server, the tree SIDs are the same.
The targets are judged to be the same monitoring target if all of the following values are the same:
-
Host name of the monitored host (name of _HOST_ in SID of the configuration)
-
Label-name of IM management node
-
IM management node Category ID
- - About IM management node label names and category IDs
-
The label name and category ID are set by the user when the monitoring method is one of the following. For the label names and category IDs of other monitoring methods, see the tree view above.
-
For HTTP/HTTPS monitoring of Blackbox exporter
Set the label name in targets of the discovery configuration file, and set the category ID in jp1_pc_category.
-
For user-defined Exporter
In the scrape definition of the Prometheus configuration file (jpc_prometheus_server.yml) or in the user-specific discovery configuration file, set the label name in jp1_pc_nodelabel and the category ID in jp1_pc_category.
-
For Fluentd logging monitoring
In the [Metric Settings] section of the monitoring definition file for text-format log files or of the Windows event-log monitoring definition file, set the label name in jp1_pc_nodelabel and the category ID in jp1_pc_category (a sketch of these label settings follows this list).
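As a minimal sketch, the label name and category ID described above could be set in a user-specific discovery configuration file (Prometheus file_sd format) as follows. The file content, target, and values are hypothetical; only the jp1_pc_nodelabel and jp1_pc_category label keys come from this subsection.

```yaml
# Hypothetical user-specific discovery configuration file (file_sd format)
- targets:
    - 'webapp01:9100'                # assumed user-defined Exporter endpoint
  labels:
    jp1_pc_nodelabel: WebApp01       # label name displayed for the IM management node
    jp1_pc_category: webService      # category ID under which the node is grouped in the tree
```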
-
When the same tree SID already exists, the grant information of the newly created tree SID is handled as shown in the following table for JP1/IM - Manager functions such as the event list and the trend information display.
| Grant information | Action |
|---|---|
| target | Add |
| label | Prefer the label of the created tree SID |
| resourceGroup | Prefer the resourceGroup of the created tree SID |
(j) Viewing IM management node in Containers
When the integrated agent, user-defined Prometheus, or Fluentd is installed in a container, the integrated operation viewer displays the container name as the host name in the tree.
(k) Creating an IM Management Node for Yet another cloudwatch exporter
IM management nodes are created for the metrics set in the Yet another cloudwatch exporter metric definition file (metrics_ya_cloudwatch_exporter.conf), for the resources that are monitored by Yet another cloudwatch exporter and for which the jp1_pc_nodelabel tag is set on AWS.
The following AWS namespaces are supported for monitoring by the Yet another cloudwatch exporter of JP1/IM - Agent.
| AWS namespace name | Metric classification name on CloudWatch# | Dimension |
|---|---|---|
| AWS/DynamoDB | Table metrics | TableName |
| AWS/EC2 | Metrics by instance | InstanceId |
| AWS/Lambda | Metrics by function name | FunctionName |
| AWS/Lambda | Metrics by resource | FunctionName, Resource |
| AWS/S3 | Storage metrics | BucketName, StorageType |
| AWS/S3 | Request metrics for each filter | BucketName, FilterId |
| AWS/SQS | Queue metrics | QueueName |
| AWS/States | Execution metrics | StateMachineArn |
| AWS/EBS | Per-volume metrics | VolumeId |
| AWS/ECS | ClusterName, ServiceName | ClusterName, ServiceName |
| AWS/EFS | File system metrics | FileSystemId |
| AWS/EFS | File system storage metrics | FileSystemId, StorageClass |
| AWS/FSx | File system metrics | FileSystemId |
| AWS/RDS | Per-database metrics | DBInstanceIdentifier |
| AWS/RDS | DBClusterIdentifier | DBClusterIdentifier |
| AWS/SNS | Topic metrics | TopicName |
- #
-
The classification name by which AWS CloudWatch groups metrics by dimension. You can check it on the CloudWatch web pages.
There are two ways to specify monitoring targets in Yet another cloudwatch exporter: auto-discovery configuration and static configuration. In JP1/IM - Agent, IM management nodes can be created only for the resources specified in the auto-discovery configuration. IM management nodes are not created for entire resources or for the grouped metrics that can be viewed on CloudWatch.
If the resource for which an IM management node is to be created, or the jp1_pc_nodelabel tag set on that resource, is deleted on AWS, the IM management node is not created.
Also, after you finish editing a tag, you need to wait about 10 minutes until the tag information is reflected in CloudWatch.
For details on setting Auto-discovery configuration in Yet another cloudwatch exporter configuration file (jpc_ya_cloudwatch_exporter.yml), see Yet another cloudwatch exporter configuration file (jpc_ya_cloudwatch_exporter.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
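The auto-discovery configuration could look like the following sketch, which is based on the configuration schema of the open-source Yet another cloudwatch exporter. The region, metric, and period values are assumptions, and the key names may differ in the bundled jpc_ya_cloudwatch_exporter.yml, so follow the definition file reference above.

```yaml
apiVersion: v1alpha1
discovery:
  jobs:
    - type: AWS/EC2                 # AWS namespace in which resources are discovered
      regions:
        - ap-northeast-1            # assumed region
      searchTags:
        - key: jp1_pc_nodelabel     # only resources carrying this tag are discovered
          value: .*
      metrics:
        - name: CPUUtilization      # assumed metric
          statistics: [Average]
          period: 300
          length: 300
```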
If you want to monitor by using Yet another cloudwatch exporter, make sure that CloudWatch displays metrics for the resources you want to monitor, start the Prometheus server and Yet another cloudwatch exporter services, wait until the time set in scrape_interval in the Prometheus configuration file (jpc_prometheus_server.yml) (default: 1 minute) has elapsed, and then execute the jddcreatetree command and the jddupdatetree command (specifying the configuration change mode (-c option)). After running the commands, check for any excess or deficiency of the Yet another cloudwatch exporter IM management nodes displayed in the integrated operation viewer tree.
For details on what to do if there is an excess or deficiency of the IM management node displayed, see 12.5.3(73) IM management node for Yet another cloudwatch exporter Is Not Created in the JP1/Integrated Management 3 - Manager Administration Guide.
(l) About Configuring Multiple Scrape Definitions to Monitor the Same Monitored Object
Blackbox exporter and Yet another cloudwatch exporter are Exporters that remotely monitor hosts and resources other than the hosts on which they are installed. Configurations in which the same target is monitored by several scrape definitions of the same Exporter, or in which the same target is monitored by setting up the same remotely monitoring Exporter on different hosts, are not supported.
(m) IM management node and metric Labeling
As described in 3.15.6(1)(c) Creation condition of IM management node other than JP1/IM agent base, IM management nodes are created from the trend data saved in the trend data management database. Trend data contains metric information, and each metric has labels.
The labels of metrics are set in the following definition files:
-
Discovery configuration file
The same labels are set for each Exporter. For details, see the discovery configuration file of each Exporter in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
-
Prometheus configuration file
Configure labels that cannot be set in the discovery configuration file and labels that must be changed for each metric. For details, see Prometheus configuration file (jpc_prometheus_server.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
(n) Setting of JP1 resource groups
JP1 resource groups are not set for the IM management nodes created by JP1/IM - Agent. To control viewing and operations by using JP1 resource groups, define a system for which JP1 resource groups are set in the system node definition file (imdd_systemnode.conf), and allocate the JP1/IM - Agent hosts to it.
For details about the system node definition file (imdd_systemnode.conf), see System node definition file (imdd_systemnode.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
(2) Creating an association
Related information is created for the relation lines displayed on the Related node tab of the integrated operation viewer.
The following related information is created:
| Related type | from | to |
|---|---|---|
| managerAgent | SID# indicating JP1/IM - Manager | JP1/IM agent control base SID |
| hypervisorVM | SID indicating the hypervisor | SID indicating the VM |
- #
-
If JP1/IM - Manager is in a hierarchical configuration, it is JP1/IM - Manager that directly manages JP1/IM agent control base.
The plugin for container monitoring through user-defined Prometheus implements this method and generates related information from the samples stored in the trend data management database. The following table lists the related information to be generated.
| Related type | from: Type of SID | from: Object root node type | to: Type of SID | to: Object root node type |
|---|---|---|---|---|
| containerCluster | Kubernetes SID# | JPC-KUBERNETES-CLUSTER | Kubernetes SID# | JPC-KUBERNETES-NODE |
| containerCluster | Kubernetes SID# | JPC-KUBERNETES-CLUSTER | Kubernetes SID# | JPC-KUBERNETES-NAMESPACE |
| containerCluster | Kubernetes SID# | JPC-KUBERNETES-NAMESPACE | Kubernetes SID# | JPC-KUBERNETES-DEPLOYMENT |
| containerCluster | Kubernetes SID# | JPC-KUBERNETES-NAMESPACE | Kubernetes SID# | JPC-KUBERNETES-REPLICASET |
| containerCluster | Kubernetes SID# | JPC-KUBERNETES-NAMESPACE | Kubernetes SID# | JPC-KUBERNETES-STATEFULSET |
| containerCluster | Kubernetes SID# | JPC-KUBERNETES-NAMESPACE | Kubernetes SID# | JPC-KUBERNETES-DAEMONSET |
| containerCluster | Kubernetes SID# | JPC-KUBERNETES-NAMESPACE | Kubernetes SID# | JPC-KUBERNETES-CRONJOB |
| containerCluster | Kubernetes SID# | JPC-KUBERNETES-NAMESPACE | Kubernetes SID# | JPC-KUBERNETES-POD |
| containerCluster | Kubernetes SID# | JPC-KUBERNETES-DEPLOYMENT | Kubernetes SID# | JPC-KUBERNETES-REPLICASET |
| containerCluster | Kubernetes SID# | JPC-KUBERNETES-REPLICASET | Kubernetes SID# | JPC-KUBERNETES-POD |
| containerCluster | Kubernetes SID# | JPC-KUBERNETES-STATEFULSET | Kubernetes SID# | JPC-KUBERNETES-POD |
| containerCluster | Kubernetes SID# | JPC-KUBERNETES-DAEMONSET | Kubernetes SID# | JPC-KUBERNETES-POD |
| sameNode | Kubernetes SID# | JPC-KUBERNETES-NODE | Agent SID# | HOST |
(3) Return of the metric list
Performance data collected by Exporters is stored as trend data in the trend data management database of JP1/IM - Manager by the integrated agent function.
The metrics of the trend data can be viewed in the integrated operation viewer or retrieved by the metric list retrieval API#1 of JP1/IM - Manager.
A metric list can be returned for the IM management nodes corresponding to the configuration information SIDs for which the trend information display function can be used, as described in 3.15.6(1)(a) Type of SID of the target configuration and its available functions.
The list of metrics to return depends on the type of Exporter, as follows:
- - For IM management node on a Blackbox exporter
-
Returns a list of the metrics whose module member values, set in the Blackbox exporter metric definition file (metrics_blackbox_exporter.conf), begin with the Module value of the IM management node. You can check the Module value of an IM management node in the properties shown in the integrated operation viewer.
- - For Yet another cloudwatch exporter
-
Returns a list of the metrics whose cloud_srv member value, set in the Yet another cloudwatch exporter metric definition file (metrics_ya_cloudwatch_exporter.conf), matches the AWS namespace name of the IM management node. You can check the AWS namespace name of an IM management node in the properties shown in the integrated operation viewer.
- - For Azure Monitor performance data collection
-
Returns a list of the metrics whose cloud_srv value, specified in the Promitor metric definition file (metrics_promitor.conf), matches the Azure service name# of the IM management node.
- - For Container monitoring through user-defined Prometheus
-
Returns a list of the metrics whose component value, specified in the container monitoring metric definition file (metrics_kubernetes.conf), matches the component name# of the IM management node.
- #
-
It can be viewed as a property in the integrated operation viewer of JP1/IM - Manager.
- - For Exporter other than the above
-
Returns the list of metrics set in the metric definition file (metrics_Prometheus-trend-name#.conf).
- #
-
The value set in jp1_pc_trendname in "scrape_configs" of Prometheus configuration file (jpc_prometheus_server.yml) is entered.
For details about the settings and initial values of the metric definition files, see the metric definition file of each Exporter in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
If JP1/IM - Manager is in a hierarchical configuration and the trend data display function is run on the integration manager, the metrics of the IM management nodes under lower managers must also be set in the metric definition files of the integration manager.
(4) Return of trend data
The product plugins of JP1/IM - Agent return the trend data corresponding to an IM management node and metric at the following times:
-
When trend information of IM managed node of JP1/IM - Agent is displayed on the Trends tab #1 of the Integrated Operation Viewer window
-
When JP1/IM - Manager's time-series data retrieval API#2 is executed
- #1
-
For details of the Trends tab on the Integrated Operation Viewer window, see 2.6.1(6) Trends tab in the JP1/Integrated Management 3 - Manager GUI Reference.
- #2
-
For details of the Time Series Data Acquisition API, see 5.11.2 Time-series data acquisition in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
The product plugin for Azure Monitor performance data collection excludes the following labels from the returned instance names. This specification applies only to this plugin.
-
resource_uri#1
-
subscription_id#1
-
resource_group#1
-
instance_name#1
-
tenant_id#1
-
jp1_pc_prome_clustername#2
- #1
-
This label is used by Promitor.
- #2
-
This label is used in container monitoring.
The IM management nodes that can return trend data are the IM management nodes corresponding to the configuration information SIDs for which the trend information display function can be used, as described in 3.15.6(1)(a) Type of SID of the target configuration and its available functions.
The prerequisites for returning trend data and the instance names to be displayed are described below.
(a) Prerequisites
JP1/IM - Agent creates and returns trend data from the performance data stored in the trend data management database of JP1/IM - Manager. Trend data cannot be returned if the integrated agent function is not used to store performance data in the trend data management database.
If JP1/IM - Manager is in a hierarchical configuration and the trend data display function is run on the integration manager, the trend data of IM management nodes under a lower manager is created and returned from the performance data stored in the trend data management database of that lower manager. Therefore, the Intelligent Integrated Management Base of the lower manager must be accessible, and the performance data must be stored in the trend data management database of the lower manager.
-
For details about JP1/IM - Manager prerequisites for hierarchical structure, see (1) Prerequisites in 3.5.8 Monitoring of multiple sites.
-
For details about the various data managed by JP1/IM - Manager, see 3.6.1 Introduction in 3.6 Operation using WebGUI (integrated operation viewer).
(b) Instance name (a string to display as a legend in the graph)
-
If the metric has labels other than the following names and other than those specified in drop_legend_labels#1 of the metric definition file, an object whose members are those labels is converted to JSON format, and the string with the leading "{" and trailing "}" removed is returned as the instance name (for example, "core":"0,0", "mode":"idle"). If no such label is set, no instance name is returned.
__name__, instance, job, jp1_pc_nodelabel, jp1_pc_prome_hostname, jp1_pc_exporter, jp1_pc_remote_monitor_instance, jp1_pc_category, jp1_pc_trendname, jp1_pc_module, jp1_pc_rm_agent_create_flag, jp1_pc_agent_create_flag, jp1_pc_multiple_node, jp1_pc_nodelabel_fluentd, jp1_pc_addon_program, account_id#2, region#2, name#2
- #1
-
For drop_legend_labels of metric definition file, see Node exporter metric definition file (metrics_node_exporter.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
- #2
-
This applies only to Yet another cloudwatch exporter.
-
If the instance name exceeds 249 characters, the 250th and subsequent characters are deleted.
-
If multiple instance names are the same up to the 249th character, a sequential number "#n" (n = 1, 2, 3, ...) is appended.
(c) About Performance Data to Retrieve
Of the performance data stored in Trend data Management Database of JP1/IM - Manager, which performance data is returned as trend data is specified by PromQL expression set in promql of metric definition file.
For details about the default value of promql, see the description of The promql for metric definition File (including $jp1im_TrendData_labels) in the table that describes the settings contents (initial state) for each metric in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference, which describes metric definition files for each Exporter.
For notes on PromQL statements, see Note on PromQL expression in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
The string "$jp1im_TrendData_labels" contained in the promql value of the metric definition file is replaced with a PromQL statement to narrow down the acquisition target when retrieving performance data. For example, by specifying a string containing "$jp1im_TrendData_labels" in the promql value as follows, only the performance data of the IM management node that returns trend data can be retrieved as trend data.
- - String to specify for promql
-
(windows_memory_available_bytes and $jp1im_TrendData_labels) / (1024*1024)
- - Replaced string for "$jp1im_TrendData_labels"
-
(windows_memory_available_bytes and {jp1_pc_prome_hostname="Prometheus hostname",job="Scrape Job Name",instance="instance label value"}) / (1024*1024)
Also, the string that replaces "$jp1im_TrendData_labels" depends on the IM management node. For details, see the description of Replacing $jp1im_TrendData_labels in Node exporter metric definition file (metrics_node_exporter.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
■ Consolidation display of trend data with dynamic range vectors
When specifying a range vector in promql of the metric definition file, specify "$stepTime{minSeconds="minimum-seconds"}" or a fixed value for the range vector selector (the value specified in square brackets [ ]). If you specify anything else that cannot be interpreted as a PromQL expression, the KAJY62002-E message is output. For minimum-seconds, specify a positive integer.
When "$stepTime{minSeconds="minimum-seconds"}" is specified for the range vector selector, the trend data is dynamically calculated and displayed for the range vector so that all data is included according to the displayed range of the trend data. These are the dashboard panels (Trends, Ranks, Numbers, Gauges) and the trend data you want to view in trend data display function. For details about dashboards, see 3.2.6 Consolidation display of trend data with dynamic range vectors.
When the trend data is acquired, "$stepTime{minSeconds="minimum-seconds"}" is replaced with the time calculated by the following formula.
Trend data acquisition range (seconds) / countPerInstance#
- #
-
When the data is displayed on the Trends tab of the integrated operation viewer, 60 is assumed. For details about countPerInstance, see Parameters in 5.11.2 Time-series data acquisition in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
However, if the value obtained by the calculation expression is smaller than the value specified by minimum-seconds, "$stepTime{minSeconds="minimum-seconds"}" is replaced with the time specified by minimum-seconds.
When the value obtained by the calculation expression is a decimal, the decimal portion is rounded up to the nearest integer.
By specifying the same amount of time in minSeconds as Prometheus's scrape interval, you can ensure that there are datapoints in the range vector. If you use a range vector in a function that requires at least two pieces of data, such as the irate function, specify a value that is at least twice the scrape interval.
For example, in an environment where the scrape interval is 60 seconds, specifying the following string in promql allows the value to be obtained with a range vector selector that follows the trend data acquisition range.
- - Text specified in promql
-
100 - (avg by (instance,job,jp1_pc_nodelabel,jp1_pc_prome_hostname) (irate(windows_cpu_time_total{mode=\"idle\"}[$stepTime{minSeconds=\"120\"}]) and $jp1im_TrendData_labels) * 100)
- - Character string after replacement at trend data acquisition
-
When the trend data acquisition range is 30 minutes and the countPerInstance is 60:
100 - (avg by (instance,job,jp1_pc_nodelabel,jp1_pc_prome_hostname) (irate(windows_cpu_time_total{mode=\"idle\"}[120s]) and $jp1im_TrendData_labels) * 100)
When the trend data acquisition range is 240 minutes and the countPerInstance is 60:
100 - (avg by (instance,job,jp1_pc_nodelabel,jp1_pc_prome_hostname) (irate(windows_cpu_time_total{mode=\"idle\"}[240s]) and $jp1im_TrendData_labels) * 100)
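The replaced values above follow from the formula given earlier; as a brief worked check using only the values stated in this example: for a 30-minute acquisition range, 30 × 60 = 1,800 seconds and 1,800 ÷ 60 (countPerInstance) = 30 seconds, which is smaller than the specified minSeconds of 120, so the selector is replaced with [120s]. For a 240-minute acquisition range, 240 × 60 = 14,400 seconds and 14,400 ÷ 60 = 240 seconds, which is not smaller than 120, so the selector is replaced with [240s].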
If "{minSeconds="minimum-seconds"}" is omitted and only "$stepTime" is specified, the value is not compared with the minimum number of seconds, but is replaced with the value obtained by the formula.
The following procedures describe how to check the graph of the metric that detected the anomaly in an alert, for the case where a fixed value is specified for the range vector selector in the metric definition file and for the case where "$stepTime{minSeconds="minimum-seconds"}" is specified. A minimal alerting-rule sketch is shown after these procedures.
- When a fixed value is specified for the range vector selector
-
Prerequisite settings
Set the range vector selector values to the same value in both files, as follows:
-
expr in the alert configuration file
100 < rate(windows_net_packets_sent_total[2m])
-
Metric definition file promql
rate(windows_net_packets_sent_total[2m]) and $jp1im_TrendData_labels
-
-
Working with integrated operation viewer
Display the graph by setting the alert firing date and time to the end date and time of the display range of the graph, and check the data plotted at the end date and time of the display range of the graph.
- When "$stepTime{minSeconds="minimum-seconds"}" is specified for the range vector selector
-
Prerequisite settings
Set the range vector selector values as follows:
-
expr in the alert configuration file
100 < rate(windows_net_packets_sent_total[2m])
-
Metric definition file promql
rate(windows_net_packets_sent_total[$stepTime{minSeconds=\"120\"}]) and $jp1im_TrendData_labels
-
-
Working with integrated operation viewer
1) Set the display range of the graph and the number of plots so that the display interval of the graph is the same as the value of the range vector selector (2m) set in expr of the alert configuration file. For example, if the number of plots is 60, the display range of the graph is 2 hours.
2) Display the graph by setting the alert firing date and time to the end date and time of the display range of the graph, and check the data plotted at the end date and time of the display range of the graph.
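As a reference only, the following is a minimal sketch of the fixed-value case, assuming that the alert configuration file (jpc_alerting_rules.yml) follows the standard Prometheus rule-file syntax; the group name, alert name, and the placement of jp1_pc_metricname as a rule label are assumptions made for illustration, not values taken from this manual.
groups:
  - name: example-rules                        # hypothetical group name
    rules:
      - alert: WindowsPacketsSentHigh          # hypothetical alert name
        expr: 100 < rate(windows_net_packets_sent_total[2m])    # fixed range vector selector: 2m
        labels:
          jp1_pc_metricname: windows_net_packets_sent_total     # assumed placement of the jp1_pc_metricname item
The promql in the metric definition file then uses the same selector value, as shown in the prerequisite settings above: rate(windows_net_packets_sent_total[2m]) and $jp1im_TrendData_labels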
If JP1/IM - Manager is in a hierarchical configuration and the Integration Manager runs the trend data display function, the IM management nodes under the lower manager must also be set in the metric definition file of the Integration Manager.
(d) About the maximum number of data per instance
The upper limit on the number of data per instance is passed as an argument from JP1/IM - Manager. Data for each instance is returned at intervals of "length of the collection period (hours) ÷ upper limit on the number of data per instance". The number of data returned per instance does not exceed the upper limit given by the argument, and the return value "countPerInstance" for this method is always "false".
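As a simple illustration (the collection period and upper limit used here are assumed values, not defaults): with a 24-hour collection period and an upper limit of 60 data per instance, data for each instance is returned at intervals of 24 hours ÷ 60 = 24 minutes.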
(e) About performance data time
Trend data is returned with a time set for each piece of performance data, but this time is not the time when the performance data was collected and stored; it is a time ticked at equal intervals from the start time to the end time. The equal interval is calculated as "(end time of the time-series data acquisition API - start time of the time-series data acquisition API) ÷ upper limit on the number of data per instance". When trend data is displayed on the Trends tab of the integrated operation viewer, the upper limit on the number of data per instance is 60.
- - Example of the trend data times displayed on the Trends tab of the integrated operation viewer
-
The following are the prerequisites:
-
Time when Prometheus server collected trend data
Time of collection of the first data: 2022/02/19 12:15:00
Time of collection of the second data: 2022/02/19 12:16:00
Time of collection of the third data: 2022/02/19 12:17:00
:
-
Time range for trend data displayed in the Trends tab of the integrated operation viewer
Start Time: 2022/02/19 12:15:30
End Time: 2022/02/19 13:15:30
When trend data is displayed on the Trends tab of the integrated operation viewer, the upper limit on the number of data per instance is 60. Based on the prerequisites above, the time from the start time to the end time of the display range (end time - start time) is 60 minutes, and the equal interval ((end time - start time) ÷ upper limit on the number of data per instance) is 1 minute.
- As a result, the times of the trend data displayed on the Trends tab of the integrated operation viewer (the times set for the returned trend data) are in 1-minute increments starting from the start time of the display range, as shown below.
-
Time of display of the first data: 2022/02/19 12:15:30
Time of display of the second data: 2022/02/19 12:16:30
Time of display of the third data: 2022/02/19 12:17:30
:
-
(f) Relationship between collected performance data and trend data
In the trend data return function, performance data collected at a certain collection interval during the specified collection period is output as trend data at a certain output interval. If no new performance data exists at an output time but data was collected within the past 5 minutes, the output is supplemented with that data (the same data as the previous output) and the trend data is output. If no performance data was collected within the past 5 minutes, the data is not supplemented.
Also, if performance data cannot be collected, such as when the exporter is stopped, trend data is not output.
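For example (times assumed for illustration only): if the output interval is 1 minute and the last performance data was collected at 12:00:00, the outputs up to 12:05:00 are supplemented with the 12:00:00 data; outputs after that are not supplemented, because no performance data was collected within the past 5 minutes.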
If the Prometheus scrape interval is longer than 5 minutes, the following actions are required to output the trend data correctly. If these actions are not taken, some or all of the trend data might not be output. This is because, at each trend data output interval, supplementation is performed only from the data of the past 5 minutes. If the output interval is longer than 5 minutes, data from the time ranges that were not supplemented is not used in calculations in the PromQL statement and might not be displayed.
-
For the Trends tab of the integrated operations viewer
Specify a data display period of 300 minutes or less.
-
For time-series data retrieval API
Specify a start time and an end time such that the range between them is within "countPerInstance × 5 minutes".
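As a check on these conditions, using the upper limit already stated in (e): when countPerInstance is 60, the start time and end time must be within 60 × 5 = 300 minutes of each other, which matches the 300-minute limit for the Trends tab of the integrated operation viewer, where the number of data per instance is 60.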
(g) Things to note when the trend data output period includes a future period
If the trend data output period includes a future period, the output times of the trend data that fall after the time at which the trend data return function is used (the current time) will be future times. If you want to prevent trend data output times from being in the future, do not include a future period in the trend data output period.
(h) Relationship between alerts and trend data
Because the alert rule is evaluated when Prometheus collects the performance data, the alert notification time is the collection time of the performance data against which the alert was evaluated. When matching trend data with the alert notification time, match it with the trend data whose output time is the nearest one in the future direction.
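For example, using the display times from the example in (e) for illustration only: if trend data is output at 12:15:30, 12:16:30, 12:17:30, and so on, an alert whose notification time is 12:16:00 is matched with the trend data output at 12:16:30, the nearest output time in the future direction.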
(i) About resetting metric values
Some metrics have their performance data reset when the OS is restarted. When such metrics are displayed as trend data and the display period includes an OS restart, momentarily jumping values (very large values, very small values, or negative values) might be displayed, or the graph might show extreme changes.
In such a case, check whether the OS was restarted and ignore the data for the corresponding time period.
(5) Creating JP1 events in response to alerts
The product plugin on JP1/IM - Agent creates a JP1 event according to the alert content when the Prometheus server evaluates an alert rule and the alert status becomes "firing" or "resolved".
The created JP1 event is registered in JP1/Base on the integration manager host and can be viewed in the integrated operation viewer of JP1/IM - Manager and in JP1/IM - View.
JP1 events are also created in the following cases, but these are reported using the "default format" of the JP1 event translation API.
-
Fluentd
-
JP1/IM agent management base
-
JP1/IM agent control base
If an incorrect PromQL expression is set in expr of the alert configuration file, a JP1 event might not be issued or the extended attributes might be blank. For notes on PromQL statements, see Note on PromQL expression in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
(a) Attributes of JP1 events issued during performance data monitoring
The table below lists and describes the values specified for the attributes of JP1 events that are issued when alert conditions are met, or are no longer met, while performance data is being monitored. Except for the attributes in the table below, the attributes are the same as for JP1/IM - Agent. For details on the attributes of JP1/IM - Agent, see 3.2.3(1) Attributes of JP1 events that monitor and issue performance data in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference. For details on the other attribute values, see the description of the API in 5.6.4 Event generation in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
|
Category |
Item name |
Attribute name |
Description |
|---|---|---|---|
|
Extended attribute (common information) |
Event source host name |
JP1_SOURCEHOST |
The value varies depending on whether the alert checked Promitor performance data.
|
|
Extended attribute (program-specific information) |
Azure service name |
JPC_AZURE_SERVICE |
Name of the Azure service. This attribute is set only for Promitor performance data. The service name that corresponds to the metric name specified for the jp1_pc_metricname item in the alert configuration file (jpc_alerting_rules.yml) is found in the metric definition file and then specified. |
|
Azure tenant name |
JPC_AZURE_TENANT |
Azure tenant string. This attribute is set only for Azure performance data. It is the string that corresponds to the Azure tenant ID specified in the property label definition file, or "default" if there is no such definition. |
|
|
Azure subscription name |
JPC_AZURE_SUBSCRIPTION |
Azure subscription string. This attribute is set only for Azure performance data. It is the string that corresponds to the Azure subscription ID specified in the property label definition file, or "default" if there is no such definition. |
|
|
Azure resource group name |
JPC_AZURE_RESOURCEGROUP |
Name of the Azure resource group. This attribute is set only for Azure performance data. |
- Note
-
The total length of all the extended attribute values is limited to 10,000 bytes.
(6) Return the SID corresponding to the JP1 event
When the product plugin of JP1/IM - Agent receives a JP1 event that meets all of the following criteria, it associates the JP1 event with an IM management node.
-
Attribute "PRODUCT_NAME" starts with "/HITACHI/JP1/JPCCS"
-
Attribute "JPC_COMPONENT" has no value or matches "/HITACHI/JP1/JPCCS/CONFINFO"
JP1 events associated with IM management nodes of JP1/IM - Agent are displayed in the event list of the integrated operation viewer of JP1/IM - Manager.
If either of the following conditions is met, the association is not performed.
-
Attribute "PRODUCT_NAME" does not start with "/HITACHI/JP1/JPCCS"
-
Attribute "JPC_COMPONENT" is not equal to "/HITACHI/JP1/JPCCS/CONFINFO"
The following table lists the SID types of the IM management nodes to be associated and the conditions for the association. For details about SID types, see 3.15.6(1)(a) Type of SID of the target configuration and its available functions.
|
SID type |
Conditions for making associations |
|---|---|
|
JP1/IM agent control base SID |
Associate with JP1/IM agent control base SID if either of the following is true:
|
|
SID of logging targets |
If the property "PRODUCT_NAME" starts with "/HITACHI/JP1/JPCCS2/LOGTRAP/", it is associated with SID to be logged. |
|
Agent service SID (Fluentd (log metrics)) |
Associate with the agent service SID (Fluentd (log metrics)) if the JP1 event was issued by an alert definition whose alert name starts with one of the following:
|
|
Log Monitor Service SID |
Associate with the Log Monitor service SID if the JP1 event was issued by an alert definition whose alert name starts with one of the following:
|
|
PrometheusSID |
Associate with the PrometheusSID if either of the following is true:
|
|
AlertmanagerSID |
Associate with AlertmanagerSID if either of the following is true:
|
|
Remote Monitoring Service SID or Agent Service SID |
Associate if one of the following is true:
|
|
CloudWatchSID for EC2 |
In the case of the JP1 event shown below, the association with CloudWatchSID of EC2 is performed.
|
|
CloudWatchSID other than EC2 |
In the case of the JP1 event shown below, associate with a CloudWatchSID other than EC2.
|
|
Agent SID or Remote Agent SID |
In the case of a JP1 event that does not meet any of the above conditions, it is associated with the monitored agent SID or remote agent SID evaluated by the alert. |
|
AzureMonitorSID of an Azure VM |
A JP1 event that has been issued by monitoring performance data whose component name is the name of the plugin for the Azure Monitor performance collection feature and whose Azure service name is "Virtualmachine". |
|
AzureMonitorSID of a non-Azure VM monitoring target |
A JP1 event that has been issued by monitoring performance data whose component name is the name of the plugin for the Azure Monitor performance collection feature and whose Azure service name is neither "Virtualmachine" nor "KubernetesService". |
|
Kubernetes SID |
A JP1 event that has been issued by monitoring performance data whose component name is the name of the plugin for the container monitoring feature with user-defined Prometheus, or whose component name is the name of the plugin for the Azure Monitor performance collection feature and whose Azure service name is "KubernetesService". |
If an incorrect PromQL expression is set in expr of the alert configuration file, the extended attributes are blank and the JP1 event might not be associated with the correct IM management node. For notes on PromQL statements, see Note on PromQL expression in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
The following JP1 events are associated with the manager node. If the managers are in a hierarchical configuration, the events are associated only with the directly connected manager node.
|
Event ID |
Description |
|---|---|
|
00007630 |
JP1 event issued when an agent addition is detected |
|
00007631 |
JP1 event issued when an agent deletion is detected |
|
00007632 |
JP1 event issued when an update to agent information is detected |
(7) Supporting extended attributes of JP1 events
The JP1/IM - Agent plug-ins come with the extended event attributes definition files for integrated agents listed below. The attributes of JP1 events issued by JP1/IM - Agent are shown in the Detailed event information window of JP1/IM - Manager.
(a) File names
-
hitachi_jp1_pccs2_alert_attr_ja.conf (Japanese)
-
hitachi_jp1_pccs2_alert_attr_en.conf (English)
(b) Attributes
|
Attribute name |
Item |
|---|---|
|
E.JPC_AZURE_SERVICE |
Azure service name |
|
E.JPC_AZURE_TENANT |
Azure tenant name |
|
E.JPC_AZURE_SUBSCRIPTION |
Azure subscription name |
|
E.JPC_AZURE_RESOURCEGROUP |
Azure resource group name |
For details on the attributes of JP1 events issued by JP1/IM - Agent, see 3.15.6(5)(a) Attributes of JP1 events issued during performance data monitoring.