JP1 Version 13 JP1/Integrated Management 3 - Manager Configuration Guide


10.1.2 Setup

This section describes the procedure for setting up the add-on programs provided by IM Exporter.

For details on setting up JP1/IM - Agent, see 1.21 Setup for JP1/IM - Agent (for Windows) and 2.19 Setup for JP1/IM - Agent (for UNIX).

(1) Defining log metrics for Fluentd (optional)

If you want to use the log metrics feature, configure the Fluentd settings of JP1/IM - Agent as part of the procedure for enabling the add-on programs, and then configure the following settings.

(a) Editing a log metrics definition file (defining log metrics)

Create a log metrics definition file (fluentd_any-name_logmetrics.conf) to define input and output plug-in features.

For details on sample log metrics definition files, see the section describing the applicable file under 12.5.2(2)(l) Log metrics in the manual JP1/Integrated Management 3 - Manager Overview and System Design Guide.

(b) Editing the log monitoring target definition file (adding include statements)

To enable logs to be monitored as defined in the log metrics definition file, add a row that starts with @include to the log monitoring target definition file (jpc_fluentd_common_list.conf), followed by the name of the log metrics definition file you edited in (a) Editing a log metrics definition file (defining log metrics).

For details on sample log monitoring target definition files, see the section describing the applicable file under 12.5.2(2)(l) Log metrics in the manual JP1/Integrated Management 3 - Manager Overview and System Design Guide.

(c) Restarting Fluentd

To apply the definitions specified in (a) Editing a log metrics definition file (defining log metrics) and (b) Editing the log monitoring target definition file (adding include statements), restart Fluentd.

For details on starting and stopping services before the restart, see 10. Starting and stopping JP1/IM - Agent in the manual JP1/Integrated Management 3 - Manager Administration Guide.

(2) Setting up scraping definitions (required)

Some types of Exporters require scraping definitions to be provided. If you want features provided by the IM Exporter add-on programs to be scraped, provide the scraping definitions listed in the following table.

Table 10‒1: Scraping definitions for the features provided by the IM Exporter add-on programs

Feature provided by the add-on programs | OS that the add-on runs on | Exporter or target | Scraping definition
Windows performance data collection | Windows | Windows exporter | Definitions are not required.
Linux process data collection | Linux | Process exporter | Definitions are not required.
AWS CloudWatch performance data collection | Windows and Linux | Yet another cloudwatch exporter | Definitions are not required.
Azure Monitor performance data collection | Windows and Linux | Promitor | Definitions are not required.
Log metrics | Windows and Linux | Fluentd | The definition must be provided to use the log metrics feature. For details on what definition is needed, see (a) Scraping definition for the log metrics feature.
UAP monitoring | Windows and Linux | Script exporter | A monitoring target script must be set up. For details on what settings are needed, see (b) Scraping definition for Script exporter.

(a) Scraping definition for the log metrics feature

The log metrics feature is scraped in the same way as a user-specific Exporter, by using the following scraping definitions.

  • Create a user-specific discovery configuration file

    Create a user-specific discovery configuration file (file_sd_config_any-name.yml) and define what should be monitored.

    For details on the user-specific discovery configuration file, see the section describing the applicable file in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference (2. Definition Files).

    For details on what should be defined for the log metrics feature, see the section describing sample files for the applicable file under 12.5.2(2)(l) Log metrics in the manual JP1/Integrated Management 3 - Manager Overview and System Design Guide.

  • Set up scrape_configs in the Prometheus configuration file

    Add the scrape_configs setting in the Prometheus configuration file (jpc_prometheus_server.yml).

    For details on the Prometheus configuration file, see the section describing the applicable file in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference (2. Definition Files).

    For details on what should be defined for the log metrics feature, see the section describing sample files for the applicable file under 12.5.2(2)(l) Log metrics in the manual JP1/Integrated Management 3 - Manager Overview and System Design Guide.
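
The following is a minimal sketch of how the two files described above fit together. The file name, job name, target address, and label value are placeholders ("logmetrics" stands in for any-name); the labels actually required for the log metrics feature are those shown in the sample files referenced above.

# file_sd_config_logmetrics.yml (user-specific discovery configuration file)
- targets:
    - 'monitored-host:port-number'    # placeholder: address where the log metrics are exposed
  labels:
    # add the labels shown in the sample files for the log metrics feature
    jp1_pc_nodelabel: any-node-label  # placeholder label value

# Addition to the scrape_configs section of jpc_prometheus_server.yml
scrape_configs:
  - job_name: jpc_logmetrics          # placeholder job name
    file_sd_configs:
      - files:
          - 'file_sd_config_logmetrics.yml'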

(b) Scraping definition for Script exporter

You can specify a scraping definition by using one of two methods: the http_sd_config method, which runs all scripts defined in the Script exporter configuration file (jpc_script_exporter.yml), or the file_sd_config method, which runs only the script that you specify, from among those defined in the Script exporter configuration file (jpc_script_exporter.yml), as a params element of scrape_configs in the Prometheus configuration file (jpc_prometheus_server.yml). The default is the http_sd_config method.

For details on the Script exporter configuration file and the Prometheus configuration file, see the section describing the applicable file in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference (2. Definition Files).

The following shows scraping definition examples.

  • Example of a scraping definition with the http_sd_config method

scrape_configs:
  - job_name: jpc_script_exporter
    http_sd_configs:
      - url: http://installation-host-name:port/discovery
    relabel_configs:
      - source_labels: [__param_script]
        target_label: jp1_pc_script
      - target_label: jp1_pc_exporter
        replacement: JPC Script Exporter
      - target_label: jp1_pc_category
        replacement: any-category-name
      - target_label: jp1_pc_trendname
        replacement: script_exporter
      - target_label: jp1_pc_multiple_node
        replacement: jp1_pc_exporter="{job='jpc_script.*',jp1_pc_multiple_node=''}"
      - target_label: jp1_pc_nodelabel
        replacement: Script metric collector(Script exporter)
      - target_label: jp1_pc_agent_create_flag
        replacement: false
    metric_relabel_configs:
      - source_labels: [jp1_pc_script]
        target_label: jp1_pc_nodelabel
      - regex: (jp1_pc_multiple_node|jp1_pc_script|jp1_pc_agent_create_flag)
        action: labeldrop
installation-host-name

Specify the name of the host where Script exporter has been installed, with 1 to 255 characters other than control characters.

port

Specify the port number Script exporter is going to use.

any-category-name

Specify a category ID of the IM management node for the agent SID, with 1 to 255 characters other than control characters.

  • Example of a scraping definition with the file_sd_config method

scrape_configs:
# Example of running a script in a configuration file
  - job_name: any-scraping-job-name-1
    file_sd_configs:
      - files:
        - 'path-to-the-Script-exporter-discovery-configuration-file'
    metrics_path: /probe
    params:
      script: [scripts.name-in-the-Script-exporter-configuration-file]
    relabel_configs:
      - source_labels: [__param_script]
        target_label: jp1_pc_nodelabel
      - target_label: jp1_pc_category
        replacement: any-category-name
      - target_label: jp1_pc_nodelabel
        replacement: Script metric collector(Script exporter)
    metric_relabel_configs:
      - source_labels: [jp1_pc_script]
        target_label: jp1_pc_nodelabel
      - regex: (jp1_pc_multiple_node|jp1_pc_script|jp1_pc_agent_create_flag)
        action: labeldrop
 
# Example of running a script in a configuration file with additional arguments 1 and 2 added to the script
  - job_name: any-scraping-job-name-2
    file_sd_configs:
      - files:
        - 'path-to-the-Script-exporter-discovery-configuration-file'
    metrics_path: /probe
    params:
      script: [scripts.name-in-the-Script-exporter-configuration-file]
      argument-name-1: [argument-name-1-value]
      argument-name-2: [argument-name-2-value]
    relabel_configs:
      - source_labels: [__param_script]
        target_label: jp1_pc_nodelabel
      - target_label: jp1_pc_category
        replacement: any-category-name
      - target_label: jp1_pc_nodelabel
        replacement: Script metric collector(Script exporter)
    metric_relabel_configs:
      - source_labels: [jp1_pc_script]
        target_label: jp1_pc_nodelabel
      - regex: (jp1_pc_multiple_node|jp1_pc_script|jp1_pc_agent_create_flag)
        action: labeldrop
any-scraping-job-name

Specify a given scraping job name that is unique on the host, with 1 to 255 characters other than control characters.

path-to-the-Script-exporter-discovery-configuration-file

Specify the path to the Script exporter discovery configuration file (jpc_file_sd_config_script.yml).

any-category-name

Specify a category ID of the IM management node for the agent SID, with 1 to 255 characters other than control characters.

(3) Editing the definition file for Windows exporter

(a) Specifying monitored processes (required)

- Edit the Windows exporter configuration file (jpc_windows_exporter.yml)

Edit the Windows exporter configuration file (jpc_windows_exporter.yml) to define which processes are to be monitored.

By default, no processes are monitored, so specify the processes that you want to monitor in the Windows exporter configuration file.

For details on the Windows exporter configuration file, see Windows exporter configuration file (jpc_windows_exporter.yml) of 10. IM Exporter definition files in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
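
A minimal sketch is shown below. It assumes that jpc_windows_exporter.yml follows the configuration format of the OSS windows_exporter (a list of enabled collectors and a regular expression that selects process names); the exact keys, and whether the process filter is named include or whitelist, depend on the bundled windows_exporter version, so check the shipped file and the referenced manual section.

collectors:
  enabled: cpu,logical_disk,memory,process
collector:
  process:
    include: "myapp\\.exe|jpc_.*"   # placeholder regular expression for the processes to monitor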

(b) Modifying monitoring metrics (optional)

- Edit the Prometheus configuration file (jpc_prometheus_server.yml)

If you want to change metrics to be collected, modify the metric_relabel_configs setting in the Prometheus configuration file (jpc_prometheus_server.yml).

For details on the Prometheus configuration file, see Prometheus configuration file (jpc_prometheus_server.yml) of JP1/IM - Agent in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference (2. Definition Files).
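
For illustration, a hedged sketch of such a change is shown below; the job name and the metric name pattern are placeholders, not values taken from the shipped file.

scrape_configs:
  - job_name: jpc_windows            # placeholder: the scrape job that targets Windows exporter
    # ...existing settings such as file_sd_configs and relabel_configs...
    metric_relabel_configs:
      - source_labels: ['__name__']
        regex: 'windows_process_.*'  # placeholder: metrics you no longer want to store
        action: drop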

- Edit the Windows exporter (process monitoring) metric definition file (metrics_windows_exporter_process.conf)

If you want to change process monitoring metrics displayed in the Trends tab of the integrated operation viewer, modify the Windows exporter (process monitoring) metric definition file (metrics_windows_exporter_process.conf).

For details on the Windows exporter (process monitoring) metric definition file, see Windows exporter (process monitoring) metric definition file (metrics_windows_exporter_process.conf) of 10. IM Exporter definition files in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(4) Setting up Process exporter

(a) Specifying monitored processes (required)

- Edit the Process exporter configuration file (jpc_process_exporter.yml)

Edit the Process exporter configuration file (jpc_process_exporter.yml) to define which processes are to be monitored.

By default, no processes are monitored, so uncomment the initial settings and then specify the processes that you want to monitor in the Process exporter configuration file.

For details on the Process exporter configuration file, see Process exporter configuration file (jpc_process_exporter.yml) of 10. IM Exporter definition files in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
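
A minimal sketch is shown below, assuming that jpc_process_exporter.yml follows the process_names format of the OSS process-exporter; the process names are placeholders, and the uncommenting described above applies to the settings already present in the shipped file.

process_names:
  # Group processes by command name and monitor only the ones listed here.
  - name: "{{.Comm}}"
    comm:
      - java     # placeholder process name
      - httpd    # placeholder process name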

(b) Modifying monitoring metrics (optional)

- Edit the Prometheus configuration file (jpc_prometheus_server.yml)

If you want to change metrics to be collected, modify the metric_relabel_configs setting in the Prometheus configuration file (jpc_prometheus_server.yml).

For details on the Prometheus configuration file, see Prometheus configuration file (jpc_prometheus_server.yml) of JP1/IM - Agent in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference (2. Definition Files).

- Edit the Process exporter metric definition file (metrics_process_exporter.conf)

If you want to change metrics displayed in the Trends tab of the integrated operation viewer, edit the settings in the Process exporter metric definition file (metrics_process_exporter.conf).

For details on the Process exporter metric definition file, see Process exporter metric definition file (metrics_process_exporter.conf) of 10. IM Exporter definition files in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(5) Setting up Yet another cloudwatch exporter (required)

Although IM Exporter adds support for more AWS services, the setup procedure for Yet another cloudwatch exporter in JP1/IM - Agent is unchanged.

For details on the setup procedure for Yet another cloudwatch exporter, see 2.19.2(7) Setup in Yet another cloudwatch exporter in 2.19 Setup for JP1/IM - Agent (for UNIX) and 1.19.3(1)(d) Settings of product plugin (for Windows) in 1.19 Setting up JP1/IM - Manager (for Windows).

However, for Windows, place the credential file described in 2.19 Setup for JP1/IM - Agent (for UNIX) > 2.19.2(7) Setup in Yet another cloudwatch exporter > (b) Modify Setup to connect to CloudWatch (for Linux) (optional) under "C:\Windows\System32\config\systemprofile".

(a) Configuring the system node definition file (imdd_systemnode.conf) (required)

The following table shows a setting example of the system node definition file (imdd_systemnode.conf) for creating system nodes for the AWS services newly supported by IM Exporter. System nodes are described under (e) Tree format in 12.5.1(1) Creating IM management nodes (__configurationGet method) in the manual JP1/Integrated Management 3 - Manager Overview and System Design Guide.

Table 10‒2: Setting example of the system node definition file (imdd_systemnode.conf)

displayName | type | name
Amazon Elastic Container Service | JP1PC-AWS-ECS | [{".*":"regexp"}]
Amazon Elastic Block Store | JP1PC-AWS-EBS | [{".*":"regexp"}]
Amazon Elastic File System | JP1PC-AWS-EFS | [{".*":"regexp"}]
FSx File System | JP1PC-AWS-FSX | [{".*":"regexp"}]
Simple Notification Service | JP1PC-AWS-SNS | [{".*":"regexp"}]
Relational Database Service | JP1PC-AWS-RDS | [{".*":"regexp"}]

The following shows how the items in the above table can be defined in the system node definition file.

{
  "meta":{
    "version":"2"
  },
  "allSystem":[
    {
      "id":"ecs",
      "displayName":"Amazon Elastic Container Service",
      "objectRoot":[
        {
          "type":"JP1PC-AWS-ECS",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"ebs",
      "displayName":"Amazon Elastic Block Store",
      "objectRoot":[
        {
          "type":"JP1PC-AWS-EBS",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"efs",
      "displayName":"Amazon Elastic File System",
      "objectRoot":[
        {
          "type":"JP1PC-AWS-EFS",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"fsx",
      "displayName":"FSx File System",
      "objectRoot":[
        {
          "type":"JP1PC-AWS-FSX",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"sns",
      "displayName":"Simple Notification Service",
      "objectRoot":[
        {
          "type":"JP1PC-AWS-SNS",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"rds",
      "displayName":"Relational Database Service",
      "objectRoot":[
        {
          "type":"JP1PC-AWS-RDS",
          "name":[{".*":"regexp"}]
        }
      ]
    }
  ]
}

(6) Setting up Promitor

If you use Promitor for monitoring, configure the following settings.

(a) Configuring the settings for establishing a connection to Azure (required)

- Modify the service definition file or the unit definition file

You specify the storage location of the Promitor configuration file with an absolute path in the PROMITOR_CONFIG_FOLDER environment variable. Modify this environment variable, which is found in the service definition file or the unit definition file. For details on the service definition file and the unit definition file, see the sections describing the applicable files under 10. IM Exporter definition files in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

- Modify the Promitor Scraper runtime configuration file (runtime.yaml)

In the Promitor Scraper runtime configuration file (runtime.yaml), specify the path to the Promitor Scraper configuration file (metrics-declaration.yaml) in metricsConfiguration.absolutePath. For details on the Promitor Scraper runtime configuration file (runtime.yaml), see the section describing the applicable file under 10. IM Exporter definition files in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
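
For example, the relevant part of runtime.yaml might look like the following sketch (the path is a placeholder):

metricsConfiguration:
  absolutePath: /path/to/conf/promitor/scraper/metrics-declaration.yaml   # placeholder absolute path to the Promitor Scraper configuration file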

- Configure information for connecting to Azure

Configure authentication information used for Promitor to connect to Azure. For details on how to configure it, see (b) Configuring authentication information for connecting to Azure.

(b) Configuring authentication information for connecting to Azure

Promitor can connect to Azure through the service principal method or the managed ID method. Only the service principal method is available when Promitor is installed in hosts other than Azure Virtual Machines. Both the service principal method and the managed ID method are available when Promitor is installed in Azure Virtual Machines.

The following describes three procedures for connecting to Azure.

  • Service principal method

    This uses a client secret to connect to Azure.

  • Managed ID method (system-assigned)

    This uses a system-assigned managed ID to connect to Azure.

  • Managed ID method (user-assigned)

    This uses a user-assigned managed ID to connect to Azure.

- Using the service principal method to connect to Azure

Perform steps 1 to 3 in Azure Portal and then perform steps 4 to 6 on a host where Promitor has been installed.

  1. Create an application and issue a client secret.

  2. Obtain an application (client) ID in Overview for the application.

  3. Select a resource group (or subscription) to be monitored, and then select Access control (IAM) and Add role assignment.

  4. Add the client secret value under the Value column issued in step 1 to JP1/IM - Agent.

    Specify the values in the table below for keys used to register the secret.

    Secret registration key

    Value

    Promitor Resource Discovery key

    Promitor.resource_discovery.env.AUTH_APPKEY

    Promitor Scraper key

    Promitor.scraper.env.AUTH_APPKEY

    For details on how to register the secrets, see the description in 9.5.7(2) Adding, modifying, and removing secrets under 9.5.7 Secret obfuscation of JP1/IM - Agent in the manual JP1/Integrated Management 3 - Manager Overview and System Design Guide.

    Important

    When building in a container environment, you cannot perform this step before you create a container image. Create a container, and then perform this step.

  5. In the Promitor Scraper runtime configuration file (runtime.yaml) and the Promitor Resource Discovery runtime configuration file (runtime.yaml), specify ServicePrincipal for authentication.mode.

  6. In the Promitor Scraper configuration file (metrics-declaration.yaml) and the Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml), specify the information on the Azure instance to connect to.

    • Promitor Scraper configuration file (metrics-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureMetadata.

    • Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureLandScape.
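
The following sketch shows where these values go, assuming the OSS Promitor configuration format; the identityId key is an assumption based on that format, and all IDs are placeholders, so follow the referenced definition-file descriptions for the exact keys.

# runtime.yaml (Promitor Scraper and Promitor Resource Discovery)
authentication:
  mode: ServicePrincipal
  identityId: application-client-id-from-step-2    # assumption: key name per OSS Promitor

# metrics-declaration.yaml (Promitor Scraper)
azureMetadata:
  tenantId: your-tenant-id                         # placeholder
  subscriptionId: your-subscription-id             # placeholder
  resourceGroupName: your-resource-group           # placeholder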

- Using the managed ID method (system-assigned) to connect to Azure

Perform steps 1 to 3 in Azure Portal and then perform steps 4 and 5 on a host where Promitor has been installed.

  1. In Virtual Machines, select the Azure Virtual Machine where Promitor has been installed.

  2. Go to Identity and then System assigned, and change Status to On.

  3. In Identity - System assigned, under Permissions, select Azure role assignments and specify Monitoring Reader.

  4. In the Promitor Scraper runtime configuration file (runtime.yaml) and the Promitor Resource Discovery runtime configuration file (runtime.yaml), specify SystemAssignedManagedIdentity for authentication.mode.

  5. In the Promitor Scraper configuration file (metrics-declaration.yaml) and the Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml), specify the information on the Azure instance to connect to.

    • Promitor Scraper configuration file (metrics-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureMetadata.

    • Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureLandScape.

- Using the managed ID method (user-assigned) to connect to Azure

Perform steps 1 to 5 in Azure Portal and then perform steps 6 and 7 on a host where Promitor has been installed.

  1. In the service search, select Managed Identities and then Create Managed Identity.

  2. Specify a resource group, name, and other information to create a managed ID.

  3. In Azure role assignments, assign Monitoring Reader.

  4. In Virtual Machines, select the Azure Virtual Machine where Promitor has been installed.

  5. Select Identity, User assigned, and then Add, and add the managed ID you created in step 2.

  6. In the Promitor Scraper runtime configuration file (runtime.yaml) and the Promitor Resource Discovery runtime configuration file (runtime.yaml), specify UserAssignedManagedIdentity for authentication.mode.

  7. In the Promitor Scraper configuration file (metrics-declaration.yaml) and the Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml), specify the information on the Azure instance to connect to.

    • Promitor Scraper configuration file (metrics-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureMetadata.

    • Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureLandScape.

(c) Configuring a proxy-based connection to Azure (optional)

If your connection to Azure must be established via a proxy, use the HTTPS_PROXY environment variable. For details on how to set it up, see 2.19.2(7)(c) Connect to CloudWatch through a proxy (for Linux) (optional) in 2.19 Setup for JP1/IM - Agent (for UNIX). For NO_PROXY, specify the value of resourceDiscovery.host in the Promitor Scraper runtime configuration file (runtime.yaml).

(d) Configuring scraping targets (required)

- Configure monitoring targets that must be specified separately (required)

Monitoring targets are detected automatically by default; however, some services, such as those described below, cannot be detected automatically. To monitor these services, edit the Promitor Scraper configuration file (metrics-declaration.yaml) and specify the monitoring targets separately.

  • Services you must specify separately as monitoring targets

    These services are found as the ones with automatic discovery disabled in the table listing the services Promitor can monitor of 12.5.2(2)(f) Promitor in the manual JP1/Integrated Management 3 - Manager Overview and System Design Guide.

  • How to specify monitoring targets separately

    Uncomment a monitoring target in the Promitor Scraper configuration file (metrics-declaration.yaml) and add it to the resources section.
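
As an illustration, the following hedged sketch (OSS Promitor metrics-declaration format; the metric, description, and resource names are placeholders) shows a Container Instances entry added to the resources section of a metric definition:

metrics:
  - name: azure_container_instance_cpu_usage        # placeholder metric name
    description: "CPU usage of a container group"   # placeholder description
    resourceType: ContainerInstance
    azureMetricConfiguration:
      metricName: CpuUsage
      aggregation:
        type: Average
    resources:
      # Add the resources you want to monitor separately.
      - resourceGroupName: my-resource-group        # placeholder
        containerGroup: my-container-group          # placeholder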

- Change monitoring targets (optional)

In Promitor, monitoring targets can be specified in the following two ways:

  • Specifying monitoring targets separately

    If you want to separately specify Azure resources to be monitored, add them to the Promitor Scraper configuration file (metrics-declaration.yaml).

  • Detecting monitoring targets automatically

    If you want to detect resources in your tenant automatically and monitor Azure resources in them, add them to the Promitor Scraper configuration file (metrics-declaration.yaml) and the Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml).

- Change monitoring metrics (optional)

To change metrics to be collected or displayed:

  1. Confirm that Azure Monitor has collected the metric you want to collect.

    As preparation for the settings in the next steps, check the metric name and the aggregation type. For the metric name, see "Metric" in "Reference > Supported metrics > Resource metrics" in the Azure Monitor documentation. For the aggregation type, see "Aggregation Type" in the same section of the Azure Monitor documentation.

  2. Edit the settings in the Prometheus configuration file (jpc_prometheus_server.yml).

    If you want to change metrics to be collected, modify the metric_relabel_configs setting in the Prometheus configuration file (jpc_prometheus_server.yml).

    For details on the Prometheus configuration file, see Prometheus configuration file (jpc_prometheus_server.yml) of JP1/IM - Agent in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference (2. Definition Files).

  3. Edit the settings in the Promitor metric definition file (metrics_promitor.conf).

    If you want to change metrics displayed in the Trends tab of the integrated operation viewer, edit the settings in the Promitor metric definition file (metrics_promitor.conf).

    For details on the Promitor metric definition file, see Promitor metric definition file (metrics_promitor.conf) of 10. IM Exporter definition files in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(e) Configuring labels for tenant information (optional)

Add labels for the tenant ID and subscription ID of a monitoring target to the property label definition file (property_labels.conf). Otherwise, default is displayed for the tenant and subscription in the properties of the IM management node and in the extended attributes of JP1 events.

For details on the property label definition file, see Property label definition file (property_labels.conf) of 10. IM Exporter definition files in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(f) Configuring the system node definition file (imdd_systemnode.conf) (required)

To create the system nodes described under (e) Tree format in 12.5.1(1) Creating IM management nodes (__configurationGet method) in the manual JP1/Integrated Management 3 - Manager Overview and System Design Guide, edit the system node definition file (imdd_systemnode.conf) to configure the setting items listed in the table below. You can specify any values for the items that are not listed in the table.

Table 10‒3: Settings in the system node definition file (imdd_systemnode.conf)

Item | Value
displayName | Specify the name of the service that publishes metrics for Azure Monitor.
type | Specify JP1PC-AZURE-Azure-service-name, where Azure-service-name is specified in uppercase characters. Azure-service-name is equivalent to one of the names under the Promitor resourceType name column in the table listing the services Promitor can monitor, in 12.5.2(2)(f) Promitor in the manual JP1/Integrated Management 3 - Manager Overview and System Design Guide.
name | Specify [{".*":"regexp"}].

The following table shows a setting example of the system node definition file when you create system management nodes for Azure services that are found by default as monitoring targets in the Promitor metric definition file (metrics_promitor.conf).

Table 10‒4: Setting example of the system node definition file (imdd_systemnode.conf)

displayName | type | name
Azure Function App | JP1PC-AZURE-FUNCTIONAPP | [{".*":"regexp"}]
Azure Container Instances | JP1PC-AZURE-CONTAINERINSTANCE | [{".*":"regexp"}]
Azure Kubernetes Service | JP1PC-AZURE-KUBERNETESSERVICE | [{".*":"regexp"}]
Azure File Storage | JP1PC-AZURE-FILESTORAGE | [{".*":"regexp"}]
Azure Blob Storage | JP1PC-AZURE-BLOBSTORAGE | [{".*":"regexp"}]
Azure Service Bus Namespace | JP1PC-AZURE-SERVICEBUSNAMESPACE | [{".*":"regexp"}]
Azure Cosmos DB | JP1PC-AZURE-COSMOSDB | [{".*":"regexp"}]
Azure SQL Database | JP1PC-AZURE-SQLDATABASE | [{".*":"regexp"}]
Azure SQL Server | JP1PC-AZURE-SQLSERVER | [{".*":"regexp"}]
Azure SQL Managed Instance | JP1PC-AZURE-SQLMANAGEDINSTANCE | [{".*":"regexp"}]
Azure SQL Elastic Pool | JP1PC-AZURE-SQLELASTICPOOL | [{".*":"regexp"}]
Azure Logic Apps | JP1PC-AZURE-LOGICAPP | [{".*":"regexp"}]

The following shows how the items in the above table can be defined in the system node definition file.

{
  "meta":{
    "version":"2"
  },
  "allSystem":[
    {
      "id":"functionApp",
      "displayName":"Azure Function App",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-FUNCTIONAPP",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"containerInstance",
      "displayName":"Azure Container Instances",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-CONTAINERINSTANCE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"kubernetesService",
      "displayName":"Azure Kubernetes Service",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-KUBERNETESSERVICE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"fileStorage",
      "displayName":"Azure File Storage",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-FILESTORAGE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"blobStorage",
      "displayName":"Azure Blob Storage",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-BLOBSTORAGE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"serviceBusNamespace",
      "displayName":"Azure Service Bus Namespace",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-SERVICEBUSNAMESPACE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"cosmosDb",
      "displayName":"Azure Cosmos DB",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-COSMOSDB",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"sqlDatabase",
      "displayName":"Azure SQL Database",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-SQLDATABASE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"sqlServer",
      "displayName":"Azure SQL Server",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-SQLSERVER",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"sqlManagedInstance",
      "displayName":"Azure SQL Managed Instance",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-SQLMANAGEDINSTANCE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"sqlElasticPool",
      "displayName":"Azure SQL Elastic Pool",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-SQLELASTICPOOL",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"logicApp",
      "displayName":"Azure Logic Apps",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-LOGICAPP",
          "name":[{".*":"regexp"}]
        }
      ]
    }
  ]
}

With the system node definition file configured, when the jddcreatetree command is run and a PromitorSID other than VirtualMachine is created, an IM management node is displayed under the system node that has the corresponding Azure service name. For a PromitorSID of VirtualMachine, an IM management node is displayed under the node that represents the host, without the name needing to be listed in the system node definition file.

For details on the system node definition file, see System node definition file (imdd_systemnode.conf) in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference (2. Definition Files).

(7) Setting up container monitoring

The feature you use and the setup procedure differ depending on the monitoring target. The following table lists the feature to use and where to find the setup procedure for each monitoring target.

Monitoring target | Feature you use | See the setup procedure in
Red Hat OpenShift | User-specific Prometheus | Subsections following this table
Kubernetes | User-specific Prometheus | Subsections following this table
Amazon Elastic Kubernetes Service (EKS) | User-specific Prometheus | Subsections following this table
Azure Kubernetes Service (AKS) | Azure's monitoring feature (Promitor). AKS can be monitored by default. | 10.1.2(6) Setting up Promitor

(a) Configuring the settings of scraping through user-specific Prometheus (required)

  • Red Hat OpenShift

    Settings are not required.

    An openshift-monitoring project is installed and a scraping setting is added during installation.

  • Kubernetes and Amazon Elastic Kubernetes Service (EKS)

    When Prometheus is not installed, or when Prometheus is installed but the scraping targets listed in the following table are not configured, add a scraping setting.

    Scraping target | Data you can retrieve | Metric you can collect
    kube-state-metrics | Status data on nodes, pods, and workloads | See the section describing Key metric items of 12.5.2(2) Red Hat OpenShift in the manual JP1/Integrated Management 3 - Manager Overview and System Design Guide.
    node_exporter | Node's performance data | Same as above.
    kubelet | Pod's performance data | Same as above.

The subsequent steps are common to Red Hat OpenShift, Kubernetes, and Amazon Elastic Kubernetes Service (EKS).
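
If you need to add the scraping setting yourself, a minimal sketch using static_configs is shown below; the service addresses are placeholders (8080 and 9100 are the common default ports of kube-state-metrics and node_exporter), and service-discovery-based settings such as kubernetes_sd_configs can be used instead. Scraping kubelet additionally requires HTTPS and authentication settings, which are omitted here.

scrape_configs:
  - job_name: kube-state-metrics
    static_configs:
      - targets: ['kube-state-metrics.kube-system.svc:8080']   # placeholder service address
  - job_name: node-exporter
    static_configs:
      - targets: ['node-host:9100']                            # placeholder node address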

(b) Configuring the settings for a connection (required)

Configure the remote write setting to collect information from user-specific Prometheus.

For details on how to modify the settings for each monitoring target, see (c) Modifying the Prometheus settings (Red Hat OpenShift) and (d) Modifying the Prometheus settings (Amazon Elastic Kubernetes Service (EKS)).

- global.external_labels section
  • jp1_pc_prome_hostname (required)

    Specify the name of the Prometheus host.

  • jp1_pc_prome_clustername (optional)

    Specify the name of the cluster.

    When this label is omitted, the system does not create an IM management node for the cluster.

(Example)

global:
  external_labels:
    jp1_pc_prome_hostname: promHost
    jp1_pc_prome_clustername: myCluster
- remote_write section
  • Configure a connection destination

    Specify endpoints of JP1/IM - Manager (Intelligent Integrated Management Base) as remote write destinations. Specify two endpoints, and add the label settings required for container monitoring shown in (1) and (2) below, one to each remote_write section. If you also create cluster nodes, specify one more endpoint and add the settings shown in (3).

  • Configure labels necessary for container monitoring

    To assign the labels necessary for container monitoring, add the statements shown below to the write_relabel_configs section. Because relabeling is performed during remote write, it does not affect the local storage of the user-specific Prometheus.

(1) Basic settings

  - source_labels: ['__name__']
    regex: 'kube_job_status_failed|kube_job_owner|kube_pod_status_phase|kube_daemonset_status_desired_number_scheduled|kube_daemonset_status_current_number_scheduled|kube_deployment_spec_replicas|kube_deployment_status_replicas_available|kube_replicaset_spec_replicas|kube_replicaset_status_ready_replicas|kube_replicaset_owner|kube_statefulset_replicas|kube_statefulset_status_replicas_ready|kube_node_status_condition|container_cpu_usage_seconds_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes|node_boot_time_seconds|node_context_switches_total|node_cpu_seconds_total|node_disk_io_now|node_disk_io_time_seconds_total|node_disk_read_bytes_total|node_disk_reads_completed_total|node_disk_writes_completed_total|node_disk_written_bytes_total|node_filesystem_avail_bytes|node_filesystem_files|node_filesystem_files_free|node_filesystem_free_bytes|node_filesystem_size_bytes|node_intr_total|node_load1|node_load15|node_load5|node_memory_Active_file_bytes|node_memory_Buffers_bytes|node_memory_Cached_bytes|node_memory_Inactive_file_bytes|node_memory_MemAvailable_bytes|node_memory_MemFree_bytes|node_memory_MemTotal_bytes|node_memory_SReclaimable_bytes|node_memory_SwapFree_bytes|node_memory_SwapTotal_bytes|node_netstat_Icmp6_InMsgs|node_netstat_Icmp_InMsgs|node_netstat_Icmp6_OutMsgs|node_netstat_Icmp_OutMsgs|node_netstat_Tcp_InSegs|node_netstat_Tcp_OutSegs|node_netstat_Udp_InDatagrams|node_netstat_Udp_OutDatagrams|node_network_flags|node_network_iface_link|node_network_mtu_bytes|node_network_receive_errs_total|node_network_receive_packets_total|node_network_transmit_colls_total|node_network_transmit_errs_total|node_network_transmit_packets_total|node_time_seconds|node_uname_info|node_vmstat_pswpin|node_vmstat_pswpout'
    action: 'keep'
  - source_labels: ['__name__','namespace']
    regex: '(kube_pod_|kube_job_|container_).*;(.*)'
    target_label: jp1_pc_nodelabel
    replacement: $2
  - source_labels: ['__name__','node']
    regex: 'kube_node_.*;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__','daemonset']
    regex: 'kube_daemonset_.*;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__','deployment']
    regex: 'kube_deployment_.*;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__','replicaset']
    regex: 'kube_replicaset_.*;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__','statefulset']
    regex: 'kube_statefulset_.*;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__','owner_kind','owner_name']
    regex: 'kube_job_owner;CronJob;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__']
    regex: 'node_.*'
    target_label: jp1_pc_nodelabel
    replacement: Linux metric collector(Node exporter)
  - source_labels: ['__name__']
    regex: '(kube_pod_|kube_job_|container_).*'
    target_label: jp1_pc_module
    replacement: kubernetes/Namespace
  - source_labels: ['__name__']
    regex: 'kube_node_.*'
    target_label: jp1_pc_module
    replacement: kubernetes/Node
  - source_labels: ['__name__']
    regex: 'kube_daemonset_.*'
    target_label: jp1_pc_module
    replacement: kubernetes/DaemonSet
  - source_labels: ['__name__']
    regex: 'kube_deployment_.*'
    target_label: jp1_pc_module
    replacement: kubernetes/Deployment
  - source_labels: ['__name__']
    regex: 'kube_replicaset_.*'
    target_label: jp1_pc_module
    replacement: kubernetes/ReplicaSet
  - source_labels: ['__name__']
    regex: 'kube_statefulset_.*'
    target_label: jp1_pc_module
    replacement: kubernetes/StatefulSet
  - source_labels: ['__name__','owner_kind']
    regex: 'kube_job_owner;CronJob'
    target_label: jp1_pc_module
    replacement: kubernetes/CronJob
  - source_labels: ['__name__']
    regex: 'kube_.*|container_.*'
    target_label: jp1_pc_trendname
    replacement: kubernetes
  - source_labels: ['__name__']
    regex: 'node_.*'
    target_label: jp1_pc_trendname
    replacement: node_exporter
  - source_labels: ['__name__']
    regex: 'kube_.*'
    target_label: jp1_pc_exporter
    replacement: JPC Kube state metrics
  - source_labels: ['__name__']
    regex: 'node_.*'
    target_label: jp1_pc_exporter
    replacement: JPC Node exporter
  - source_labels: ['__name__']
    regex: 'container_.*'
    target_label: jp1_pc_exporter
    replacement: JPC Kubelet
  - source_labels: ['__name__']
    regex: 'kube_.*'
    target_label: job
    replacement: jpc_kube_state
  - source_labels: ['__name__']
    regex: 'node_.*'
    target_label: job
    replacement: jpc_kube_node
  - source_labels: ['__name__']
    regex: 'container_.*'
    target_label: job
    replacement: jpc_kubelet
  - source_labels: ['__name__']
    regex: 'node_.*'
    target_label: jp1_pc_category
    replacement: platform
  - source_labels: ['job','instance']
    regex: 'jpc_kube_state;([^:]+):?(.*)'
    target_label: jp1_pc_remote_monitor_instance
    replacement: ${1}:Kubernetes state metric collector(Kube state metrics)
  - source_labels: ['job','instance']
    regex: 'jpc_kubelet;([^:]+):?(.*)'
    target_label: jp1_pc_remote_monitor_instance
    replacement: ${1}:Kubernetes resource metric collector(Kubelet)
  - regex: '__.+__|jp1_pc_prome_hostname|jp1_pc_prome_clustername|jp1_pc_nodelabel|jp1_pc_trendname|jp1_pc_module|jp1_pc_exporter|jp1_pc_remote_monitor_instance|instance|job|cronjob|namespace|schedule|concurrency_policy|daemonset|deployment|condition|status|job_name|owner_kind|owner_name|owner_is_controller|reason|replicaset|statefulset|revision|phase|node|kernel_version|os_image|container_runtime_version|kubelet_version|kubeproxy_version|pod_cidr|provider_id|system_uuid|internal_ip|key|value|effect|resource|unit|pod|host_ip|pod_ip|created_by_kind|created_by_name|uid|priority_class|host_network|ip|ip_family|image_image_id|image_spec|container_id|container|type|persistentvolumeclaim|label_.+_LABEL|id|name|device|major|minor|operation|cpu|failure_type|scope'
    action: 'labelkeep'

(2) Settings for creating the pod nodes

  - source_labels: ['__name__']
    regex: 'kube_pod_owner|kube_pod_status_phase|container_cpu_usage_seconds_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes'
    action: 'keep'
  - source_labels: ['pod']
    target_label: jp1_pc_nodelabel
  - target_label: jp1_pc_module
    replacement: kubernetes/Pod
  - target_label: jp1_pc_trendname
    replacement: kubernetes
  - target_label: jp1_pc_exporter
    replacement: JPC Kube state metrics
  - target_label: job
    replacement: jpc_kube_state
  - source_labels: ['instance']
    regex: '([^:]+):?(.*)'
    target_label: jp1_pc_remote_monitor_instance
    replacement: ${1}:Kubernetes state metric collector(Kube state metrics)
  - regex: '__.+__|jp1_pc_prome_hostname|jp1_pc_prome_clustername|jp1_pc_nodelabel|jp1_pc_trendname|jp1_pc_module|jp1_pc_exporter|jp1_pc_remote_monitor_instance|instance|job|cronjob|namespace|schedule|concurrency_policy|daemonset|deployment|condition|status|job_name|owner_kind|owner_name|owner_is_controller|reason|replicaset|statefulset|revision|phase|node|kernel_version|os_image|container_runtime_version|kubelet_version|kubeproxy_version|pod_cidr|provider_id|system_uuid|internal_ip|key|value|effect|resource|unit|pod|host_ip|pod_ip|created_by_kind|created_by_name|uid|priority_class|host_network|ip|ip_family|image_image_id|image_spec|container_id|container|type|persistentvolumeclaim|label_.+_LABEL|id|name|device|major|minor|operation|cpu|failure_type|scope'
    action: 'labelkeep'

(3) Settings for creating the cluster nodes

  - source_labels: ['__name__','jp1_pc_prome_clustername']
    regex: '(container_cpu_usage_seconds_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes);(.+)'
    action: 'keep'
  - source_labels: ['jp1_pc_prome_clustername']
    target_label: jp1_pc_nodelabel
  - target_label: jp1_pc_module
    replacement: kubernetes/Cluster
  - target_label: jp1_pc_trendname
    replacement: kubernetes
  - target_label: jp1_pc_exporter
    replacement: JPC Kubelet
  - target_label: job
    replacement: jpc_kubelet
  - source_labels: ['instance']
    regex: '([^:]+):?(.*)'
    target_label: jp1_pc_remote_monitor_instance
    replacement: ${1}:Kubernetes state metric collector(Kubelet)
  - regex: '__.+__|jp1_pc_prome_hostname|jp1_pc_prome_clustername|jp1_pc_nodelabel|jp1_pc_trendname|jp1_pc_module|jp1_pc_exporter|jp1_pc_remote_monitor_instance|instance|job|cronjob|namespace|schedule|concurrency_policy|daemonset|deployment|condition|status|job_name|owner_kind|owner_name|owner_is_controller|reason|replicaset|statefulset|revision|phase|node|kernel_version|os_image|container_runtime_version|kubelet_version|kubeproxy_version|pod_cidr|provider_id|system_uuid|internal_ip|key|value|effect|resource|unit|pod|host_ip|pod_ip|created_by_kind|created_by_name|uid|priority_class|host_network|ip|ip_family|image_image_id|image_spec|container_id|container|type|persistentvolumeclaim|label_.+_LABEL|id|name|device|major|minor|operation|cpu|failure_type|scope'
    action: 'labelkeep'

(c) Modifying the Prometheus settings (Red Hat OpenShift)

- Prerequisites

  • You can access the cluster as a user with the cluster-admin role.

  • The OpenShift CLI (oc) has been installed.

- Procedure

  1. Check whether a ConfigMap object is created.

    $ oc -n openshift-monitoring get configmap cluster-monitoring-config
  2. If the ConfigMap object is not created, create a new file.

    $ vi cluster-monitoring-config.yaml
  3. If the ConfigMap object exists, edit the cluster-monitoring-config object in the openshift-monitoring project.

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  4. Add the settings in camel case to data/config.yaml/prometheusK8s.

    (Example)

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          externalLabels:
            jp1_pc_prome_hostname: promHost
            jp1_pc_prome_clustername: myCluster
          remoteWrite:
          - url: http://host-name-of-JP1/IM - Manager (Intelligent Integrated Management Base):20703/im/api/v1/trendData/write
            writeRelabelConfigs:
            - sourceLabels: ['__name__']
              regex: 'kube_job_status_failed|kube_job_owner|kube_pod_status_phase|kube_daemonset_status_desired_number_scheduled|kube_daemonset_status_current_number_scheduled|kube_deployment_spec_replicas|kube_deployment_status_replicas_available|kube_replicaset_spec_replicas|kube_replicaset_status_ready_replicas|kube_replicaset_owner|kube_statefulset_replicas|kube_statefulset_status_replicas_ready|kube_node_status_condition|container_cpu_usage_seconds_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes|node_boot_time_seconds|node_context_switches_total|node_cpu_seconds_total|node_disk_io_now|node_disk_io_time_seconds_total|node_disk_read_bytes_total|node_disk_reads_completed_total|node_disk_writes_completed_total|node_disk_written_bytes_total|node_filesystem_avail_bytes|node_filesystem_files|node_filesystem_files_free|node_filesystem_free_bytes|node_filesystem_size_bytes|node_intr_total|node_load1|node_load15|node_load5|node_memory_Active_file_bytes|node_memory_Buffers_bytes|node_memory_Cached_bytes|node_memory_Inactive_file_bytes|node_memory_MemAvailable_bytes|node_memory_MemFree_bytes|node_memory_MemTotal_bytes|node_memory_SReclaimable_bytes|node_memory_SwapFree_bytes|node_memory_SwapTotal_bytes|node_netstat_Icmp6_InMsgs|node_netstat_Icmp_InMsgs|node_netstat_Icmp6_OutMsgs|node_netstat_Icmp_OutMsgs|node_netstat_Tcp_InSegs|node_netstat_Tcp_OutSegs|node_netstat_Udp_InDatagrams|node_netstat_Udp_OutDatagrams|node_network_flags|node_network_iface_link|node_network_mtu_bytes|node_network_receive_errs_total|node_network_receive_packets_total|node_network_transmit_colls_total|node_network_transmit_errs_total|node_network_transmit_packets_total|node_time_seconds|node_uname_info|node_vmstat_pswpin|node_vmstat_pswpout'
              action: keep
            - sourceLabels: ['__name__','namespace']
              regex: '(kube_pod_|kube_job_|container_).*;(.*)'
              targetLabel: jp1_pc_nodelabel
              replacement: $2
            - sourceLabels: ['__name__','node']
              regex: 'kube_node_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','daemonset']
              regex: 'kube_daemonset_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','deployment']
              regex: 'kube_deployment_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','replicaset']
              regex: 'kube_replicaset_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','statefulset']
              regex: 'kube_statefulset_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','owner_name']
              regex: 'kube_job_owner;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','instance']
              regex: 'node_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__']
              regex: 'kube_.*'
              targetLabel: jp1_pc_trendname
              replacement: kube_state_metrics
            - sourceLabels: ['__name__']
              regex: 'node_.*'
              targetLabel: jp1_pc_trendname
              replacement: node_exporter
            - sourceLabels: ['__name__']
              regex: 'container_.*'
              targetLabel: jp1_pc_trendname
              replacement: kubelet
            - sourceLabels: ['__name__']
              regex: 'kube_.*'
              targetLabel: jp1_pc_exporter
              replacement: JPC Kube state metrics
            - sourceLabels: ['__name__']
              regex: 'node_.*'
              targetLabel: jp1_pc_exporter
              replacement: JPC Node exporter
            - sourceLabels: ['__name__']
              regex: 'container_.*'
              targetLabel: jp1_pc_exporter
              replacement: JPC Kubelet
            - sourceLabels: ['__name__']
              regex: 'kube_.*'
              targetLabel: job
              replacement: jpc_kube_state
            - sourceLabels: ['__name__']
              regex: 'node_.*'
              targetLabel: job
              replacement: jpc_node
            - sourceLabels: ['__name__']
              regex: 'container_.*'
              targetLabel: job
              replacement: jp1_kubelet
  5. Save the file and apply the changes to the ConfigMap object.

    $ oc apply -f cluster-monitoring-config.yaml

(d) Modifying the Prometheus settings (Amazon Elastic Kubernetes Service (EKS))

- Procedure

  1. Create a yml file with a given name (example: my_prometheus_values.yml) and add the settings to the server section.

    • Settings in external_labels

      Add them to the global.external_labels section.

    • Settings in remote_write

      Add them to the remoteWrite section.

    (Example)

    server:
      global:
        external_labels:
          jp1_pc_prome_hostname: promHost
          jp1_pc_prome_clustername: myCluster
      remoteWrite:
        - url: http://host-name-of-JP1/IM - Manager (Intelligent Integrated Management Base):20703/im/api/v1/trendData/write
          write_relabel_configs:
            - source_labels: ['__name__']
              regex: 'kube_job_status_failed|kube_job_owner|kube_pod_status_phase|kube_daemonset_status_desired_number_scheduled|kube_daemonset_status_current_number_scheduled|kube_deployment_spec_replicas|kube_deployment_status_replicas_available|kube_replicaset_spec_replicas|kube_replicaset_status_ready_replicas|kube_replicaset_owner|kube_statefulset_replicas|kube_statefulset_status_replicas_ready|kube_node_status_condition|container_cpu_usage_seconds_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes|node_boot_time_seconds|node_context_switches_total|node_cpu_seconds_total|node_disk_io_now|node_disk_io_time_seconds_total|node_disk_read_bytes_total|node_disk_reads_completed_total|node_disk_writes_completed_total|node_disk_written_bytes_total|node_filesystem_avail_bytes|node_filesystem_files|node_filesystem_files_free|node_filesystem_free_bytes|node_filesystem_size_bytes|node_intr_total|node_load1|node_load15|node_load5|node_memory_Active_file_bytes|node_memory_Buffers_bytes|node_memory_Cached_bytes|node_memory_Inactive_file_bytes|node_memory_MemAvailable_bytes|node_memory_MemFree_bytes|node_memory_MemTotal_bytes|node_memory_SReclaimable_bytes|node_memory_SwapFree_bytes|node_memory_SwapTotal_bytes|node_netstat_Icmp6_InMsgs|node_netstat_Icmp_InMsgs|node_netstat_Icmp6_OutMsgs|node_netstat_Icmp_OutMsgs|node_netstat_Tcp_InSegs|node_netstat_Tcp_OutSegs|node_netstat_Udp_InDatagrams|node_netstat_Udp_OutDatagrams|node_network_flags|node_network_iface_link|node_network_mtu_bytes|node_network_receive_errs_total|node_network_receive_packets_total|node_network_transmit_colls_total|node_network_transmit_errs_total|node_network_transmit_packets_total|node_time_seconds|node_uname_info|node_vmstat_pswpin|node_vmstat_pswpout'
              action: keep
            - source_labels: ['__name__','namespace']
              regex: '(kube_pod_|kube_job_|container_).*;(.*)'
              target_label: jp1_pc_nodelabel
              replacement: $2
            - source_labels: ['__name__','node']
              regex: 'kube_node_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','daemonset']
              regex: 'kube_daemonset_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','deployment']
              regex: 'kube_deployment_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','replicaset']
              regex: 'kube_replicaset_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','statefulset']
              regex: 'kube_statefulset_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','owner_name']
              regex: 'kube_job_owner;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','instance']
              regex: 'node_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__']
              regex: 'kube_.*'
              target_label: jp1_pc_trendname
              replacement: kube_state_metrics
            - source_labels: ['__name__']
              regex: 'node_.*'
              target_label: jp1_pc_trendname
              replacement: node_exporter
            - source_labels: ['__name__']
              regex: 'container_.*'
              target_label: jp1_pc_trendname
              replacement: kubelet
            - source_labels: ['__name__']
              regex: 'kube_.*'
              target_label: jp1_pc_exporter
              replacement: JPC Kube state metrics
            - source_labels: ['__name__']
              regex: 'node_.*'
              target_label: jp1_pc_exporter
              replacement: JPC Node exporter
            - source_labels: ['__name__']
              regex: 'container_.*'
              target_label: jp1_pc_exporter
              replacement: JPC Kubelet
            - source_labels: ['__name__']
              regex: 'kube_.*'
              target_label: job
              replacement: jpc_kube_state
            - source_labels: ['__name__']
              regex: 'node_.*'
              target_label: job
              replacement: jpc_node
            - source_labels: ['__name__']
              regex: 'container_.*'
              target_label: job
              replacement: jp1_kubelet
  2. Apply the changes.

    helm upgrade prometheus-chart-name prometheus-community/prometheus -n prometheus-namespace -f my_prometheus_values.yml

(e) Configuring scraping targets (optional)

- Change monitoring targets

If you want JP1/IM to monitor only some of the monitoring targets in the user environment, specify the monitoring targets in the write_relabel_configs section. See the following examples.

(Example 1) Specifying a whitelist of specific resources

  - source_labels: ['__name__','pod']
    regex: '(kube_pod_|container_).*;(coredns-.*|prometheus)'
    action: 'keep'

(Example 2) Specifying a blacklist of all resources

  - source_labels: ['jp1_pc_nodelabel']
    regex: 'coredns-.*|prometheus'
    action: 'drop'

In addition, to monitor a metric that has already been collected with a different aggregation type, add a remote_write section and define the aggregation type to be monitored.

- Change monitoring metrics

If you want to change metrics displayed in the Trends tab of the integrated operation viewer, edit the metric definition files.

The files you need to edit are as follows:

  • Node exporter metric definition file (metrics_node_exporter.conf)

  • Container monitoring metric definition file (metrics_kubernetes.conf)

For details on these metric definition files, see the sections describing the applicable files in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(f) Configuring the system node definition file (imdd_systemnode.conf) (required)

To create the system nodes described under (e) Tree format in 12.5.1(1) Creating IM management nodes (__configurationGet method) in the manual JP1/Integrated Management 3 - Manager Overview and System Design Guide, edit the system node definition file (imdd_systemnode.conf) to configure the setting items listed in the table below. You can specify any values for the items that are not listed in the table.

Table 10‒5: Settings in the system node definition file (imdd_systemnode.conf)

Item: displayName

Value: The name of the Kubernetes component.

Item: type

Value: Specify the value in uppercase characters in the following format:

JP1PC-KUBERNETES-Kubernetes-component-name

Kubernetes-component-name is one of the names listed in the Component name column of the section describing the Kubernetes components to be monitored, in 12.5.2(2)(i) Kubernetes in the manual JP1/Integrated Management 3 - Manager Overview and System Design Guide.

Item: name

Value: Specify the following:

[{".*":"regexp"}]

The following table shows a setting example of the system node definition file for creating system management nodes for the Kubernetes components that are defined as monitoring targets by default in the container monitoring metric definition file.

Table 10‒6: Setting example of the system node definition file (imdd_systemnode.conf)

displayName     type                            name
Clusters        JP1PC-KUBERNETES-CLUSTER        [{".*":"regexp"}]
Nodes           JP1PC-KUBERNETES-NODE           [{".*":"regexp"}]
Namespaces      JP1PC-KUBERNETES-NAMESPACE      [{".*":"regexp"}]
Deployments     JP1PC-KUBERNETES-DEPLOYMENT     [{".*":"regexp"}]
DaemonSets      JP1PC-KUBERNETES-DAEMONSET      [{".*":"regexp"}]
ReplicaSets     JP1PC-KUBERNETES-REPLICASET     [{".*":"regexp"}]
StatefulSets    JP1PC-KUBERNETES-STATEFULSET    [{".*":"regexp"}]
CronJobs        JP1PC-KUBERNETES-CRONJOB        [{".*":"regexp"}]
Pods            JP1PC-KUBERNETES-POD            [{".*":"regexp"}]

The following shows how the items in the above table can be defined in the system node definition file, with Kubernetes defined as their higher-level node.

{
  "meta":{
    "version":"2"
  },
  "allSystem":[
    {
      "id":"kubernetes",
      "displayName":"Kubernetes",
      "children":[
        {
          "id":"cluster",
          "displayName":"Clusters",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-CLUSTER",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"namespace",
          "displayName":"Namespaces",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-NAMESPACE",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"node",
          "displayName":"Nodes",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-NODE",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"deployment",
          "displayName":"Deployments",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-DEPLOYMENT",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"daemonset",
          "displayName":"DaemonSets",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-DAEMONSET",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"replicaset",
          "displayName":"ReplicaSets",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-REPLICASET",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"statefulset",
          "displayName":"StatefulSets",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-STATEFULSET",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"cronjob",
          "displayName":"CronJobs",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-CRONJOB",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"pod",
          "displayName":"Pods",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-POD",
              "name":[{".*":"regexp"}]
            }
          ]
        }
      ]
    }
  ]
}

When the jddcreatetree command is run with this system node definition file in place, each IM management node is displayed under the system node that has the corresponding Kubernetes component name.

For details on the system node definition file, see System node definition file (imdd_systemnode.conf) of JP1/IM - Manager in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference (2. Definition Files).

(8) Editing the Script exporter definition file (optional)

(a) Specifying scripts as monitoring targets (required)

- Edit the Script exporter configuration file (jpc_script_exporter.yml)

Edit the Script exporter configuration file (jpc_script_exporter.yml) to define which scripts are to be monitored.

For details on the Script exporter configuration file, see Script exporter configuration file (jpc_script_exporter.yml) of 10. IM Exporter definition files in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
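For illustration only, the following is a minimal sketch of what a monitored-script entry could look like, assuming the file uses a scripts list with name and command keys; the script name and path (check_app_status, /opt/scripts/check_app_status.sh) are hypothetical placeholders, and the authoritative syntax is in the reference above.

  scripts:
    - name: check_app_status                      # hypothetical name used to identify the script
      command: /opt/scripts/check_app_status.sh   # hypothetical path to the monitored UAP script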

(b) Modifying monitoring metrics (optional)

- Edit the Prometheus configuration file (jpc_prometheus_server.yml)

If you want to add metrics to be collected from the scripts, add them to the metric_relabel_configs section in the Prometheus configuration file (jpc_prometheus_server.yml).

For details on the Prometheus configuration file, see Prometheus configuration file (jpc_prometheus_server.yml) of JP1/IM - Agent in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference (2. Definition Files).

scrape_configs:
  - job_name: jpc_script_exporter
      ...
    metric_relabel_configs:
      - source_labels: ['__name__']
        regex: 'script_success|script_duration_seconds|script_exit_code[Add metrics here]'
        action: 'keep'

- Edit the Script exporter metric definition file (metrics_script_exporter.conf)

If you want to change metrics displayed in the Trends tab of the integrated operation viewer, edit the settings in the Script exporter metric definition file (metrics_script_exporter.conf).

For details on the Script exporter metric definition file, see Script exporter metric definition file (metrics_script_exporter.conf) of 10. IM Exporter definition files in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(9) Setting up log metrics definition in JP1/IM - Manager (optional)

If you want the Trends tab of the integrated operation viewer of JP1/IM - Manager to display time-series data collected by the log metrics feature when the trend information of a node is displayed, define the log metrics to be shown in JP1/IM - Manager.

Define these log metrics by using user-specific metric definitions.

(10) Specifying a listening port number and listening address (optional)

If you do not change the default port numbers and do not need to restrict the IP addresses to listen on, you can skip this step.

For details on what you should modify if you want to change port numbers or IP addresses to listen to, see 13.2.2 Procedure required to change the Exporter listening port and 13.2.3 Procedure required to change the listening port for scraping used by the log metrics feature of 13.2 Changing the IM Exporter configuration in the manual JP1/Integrated Management 3 - Manager Administration Guide.

(11) Alive Monitoring Processes by Windows exporter (optional)

The following table lists the processes of the IM Exporter add-on programs that are monitored by the Windows exporter.

Table 10‒7: Processes monitored by the Windows exporter

Service: jpc_script_exporter_service#
Process to monitor: Agent path\bin\jpc_script_exporter_service.exe
Monitored names: Monitoring target 1: Script exporter
                 Monitoring target 2: script_exporter

Service: jpc_promitor_scraper_service#
Process to monitor: Agent path\bin\jpc_promitor_scraper_service.exe
Monitored names: Monitoring target 1: RM Promitor
                 Monitoring target 2: promitor_scraper

Service: jpc_promitor_resource_discovery_service#
Process to monitor: Agent path\bin\jpc_promitor_resource_discovery_service.exe
Monitored names: Monitoring target 1: RM Promitor
                 Monitoring target 2: promitor_resource_discovery

#

Indicates a Windows service program.

For an example of an alert definition that uses monitoring target 1 and monitoring target 2, see 1.21.2(9)(b) Alive Monitoring Processes by Windows exporter.
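As an illustration only (this is not the example given in the referenced section), the following is a minimal sketch of an alert rule that watches one of the services in the table above, assuming the Windows exporter service collector is enabled and the windows_service_state metric is being scraped; the group name, alert name, evaluation period, and severity label are assumptions.

  groups:
    - name: im_exporter_process_alive_sketch       # hypothetical group name
      rules:
        - alert: jpc_script_exporter_service_stopped
          # Fires when the jpc_script_exporter_service Windows service is not in the "running" state.
          expr: windows_service_state{name="jpc_script_exporter_service",state="running"} == 0
          for: 3m
          labels:
            severity: error                        # hypothetical severity label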