
1.21.2 Settings of JP1/IM - Agent

Organization of this subsection

(1) Common setting methods

(a) Edit the configuration files (for Windows)

The configuration files are stored in the conf directory. There are two ways to modify the content of a configuration file:

  • Using the integrated operation viewer

  • Logging in to the host and editing the files directly

For the configuration files that can be edited from the integrated operation viewer, see the notes on the definition files for JP1/IM - Agent (JP1/IM agent control base) in List of definition files in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference. If you log in to the host and edit directly, all configuration files can be edited.

■ Using the integrated operation viewer

  1. Download the configuration file from the integrated operation viewer.

    Select the file you want to edit in the integrated operation viewer and download it.

    If you want to add a user-created definition file, do the following:

    1. Download the user-created definition file list definition file.

    2. Add information about the definition file you want to add to the user-created definition file list definition file.

    3. Upload the user-created definition file list definition file.

    4. Upload the definition file that you want to add.

  2. Edit the downloaded file.

    Note

    The format of a Prometheus server definition file can be checked with the promtool command, so we recommend checking the file at this point (see the command sketch after this procedure).

    The promtool command is included with the Prometheus server, which can be downloaded from the GitHub website. Use the same version as the Prometheus server that came with your JP1/IM - Agent.

    The version of a JP1/IM - Agent add-on program can be checked in the add-on function list in the List of Integrated Agents window of the integrated operation viewer, or in the addon_info.txt file stored in "Agent-path\addon_management\add-on-name\".

  3. Upload the edited file with the integrated operation viewer.

    The settings are applied automatically when the file is uploaded.
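As a reference for the note in step 2, the following is a minimal sketch of such a check, assuming that promtool is on the PATH and is run in the directory holding the downloaded files (check rules validates the alert configuration file, and check config validates the Prometheus configuration file):

    promtool check rules jpc_alerting_rules.yml
    promtool check config jpc_prometheus_server.yml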

■ Logging in to the host and editing the files directly

  1. Log in to the integrated agent host.

  2. Stop the JP1/IM - Agent services.

  3. Edit the configuration files.

    Note

    The format of a Prometheus server definition file can be checked with the promtool command, so we recommend checking the files at this point.

  4. Start the JP1/IM - Agent services.

(b) Changing the service definition file (for Windows)

The storage destination and file name of the service definition file are as follows:

  • Storage destination: Installation-destination-folder\jp1ima\bin\

  • File name: jpc_service-name_service.xml

Important
  • If you change items in the service definition file, you must restart the service or reinstall the service# for the changes to take effect. For details about what is required to apply each item, see When the definitions are applied in Service definition file (jpc_program-name_service.xml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

  • If you change any item that requires service reinstallation#, you must disable the registration of the service and then enable it again. For details about how to enable or disable the registration of a service, see 1.21.1(1) Enable or disable add-on program.

#

Reinstalling a service means deleting the service and then creating it again (using the jpc_service command).

To change the service definition file, follow these steps:

  1. Log in to the integrated agent host.

  2. Stop the JP1/IM - Agent service.

  3. Edit the service definition file.

  4. Start the JP1/IM - Agent service.

(c) Changing command-line options (for Windows)

Change the command-line options in the <arguments> tag of the service definition file.

For how to edit the file, see 1.21.2(1)(b) Changing the service definition file (for Windows).
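As an illustration, the listen port of the Prometheus server described in 1.21.2(3)(a) is one such command-line option; a schematic excerpt of the <arguments> tag (all surrounding elements and other options omitted) might look like this:

    <arguments>--web.listen-address=:20713 (Omitted)</arguments>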

(2) Setup for JP1/IM agent control base

(a) Changing the Integrated Manager to connect to (for Windows) (optional)

  1. Stop the JP1/IM - Agent service.

  2. Change the Integrated Manager to connect to.

    Change the connection-destination Integrated Manager defined in the imagent common configuration file (jpc_imagentcommon.json) to the new destination.

    For details on how to change the configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

    If you want to change the Integrated Manager to which the secondary JP1/IM agent control base connects when using the multi-manager host data delivery feature, change the definition under immgr_add.secondary instead of immgr.

  3. Remove the integrated agent host.

    In the List of Integrated Agents window of the integrated operation viewer, select the check box next to the integrated agent host whose connection destination you are changing, and then click the Remove button.

    For details, see 2.2.1 List of Integrated Agents window in the JP1/Integrated Management 3 - Manager GUI Reference.

  4. Check the initial secret.

    Check the initial secret (the secret for first-time connection) in the integrated operation viewer on the Integrated Manager host.

    For details, see 2.2.3 Show Initial Secret window in the JP1/Integrated Management 3 - Manager GUI Reference.

  5. Obfuscate and register the initial secret.

    Use the secret management command to obfuscate and register the initial secret.#

    jimasecret -add -key immgr.initial_secret -s "initial-secret"
  6. Delete the individual secret.

    Use the secret management command to delete the individual secret.#

    jimasecret -rm -key immgr.client_secret
  7. Replace the CA certificate.

    For details on how to change the CA certificate, see 1.21.2(2)(c) Place CA certificate (for Windows) (optional).

    This step is not required if the certificate authority that issued the server certificate of the previously connected Integrated Manager is the same as that of the imbase at the new connection destination.

  8. Start JP1/IM - Agent.

#

If you want to change the Integrated Manager to which the secondary JP1/IM agent control base connects when using the multi-manager host data delivery feature, see 3.15.7(5)(e) Keys for initial secret, client secret, and HTTP Proxy Passwords in the JP1/Integrated Management 3 - Manager Overview and System Design Guide for details about what to specify for the -key option.

(b) Change the port (for Windows) (optional)

The listen ports that the JP1/IM agent control base uses are specified in the imagent configuration file (jpc_imagent.json) and the imagentproxy configuration file (jpc_imagent_proxy.json).

For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

For details about the default port number, see Appendix C.1(2) Port numbers used by JP1/IM - Agent in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

Important

If you change a port number, you must also review the firewall settings. For details, see 1.21.2(17) Firewall's Setup (for Windows) (required).

(c) Place CA certificate (for Windows) (optional)

To encrypt the communication between the JP1/IM agent management base and the JP1/IM agent control base, specify tls_config in the imagent common configuration file (jpc_imagentcommon.json), and then configure the following settings. When configuring the secondary JP1/IM agent control base for the multi-manager host data delivery feature, set the items under immgr_add.secondary. instead of the top-level tls_config.

If you do not want to encrypt the communication, this setup is not required.

For instructions on deploying a CA certificate, see 9.4.5 Settings for JP1/IM - Agent (JP1/IM agent control base).

■ To verify the server certificate of JP1/IM agent management base

  1. Place the CA certificate.

    Place the CA certificate of the certificate authority that issued the server certificate of the imbase you are connecting to in the following directory:

    • In Windows

      Agent-path\conf\user\cert\

    • In Linux

      /opt/jp1ima/conf/user/cert/

  2. Specify the CA certificate path in the imagent common configuration file (jpc_imagentcommon.json).

  3. Restart imagent and imagentproxy.

■ Not to verify the server certificate of JP1/IM agent management base

  1. Set "true" in the tls_config.insecure_skip_verify of imagent shared configuration file (jpc_imagentcommon.json) tls_config.insecure_skip_verify.

(d) Modify settings related to action execution (for Windows) (optional)

The settings for action execution are defined in the imagent configuration file (jpc_imagent.json).

For details about how to set them, see 1.21.2(1)(a) Edit the configuration files (for Windows).

(e) Set up the authentication ID and password for proxy authentication (for Windows) (optional)

If there is a proxy server that requires Basic authentication between the agent host and the manager host, an authentication ID and password must be set up.

Set the authentication ID in immgr.proxy_user of the imagent common configuration file (jpc_imagentcommon.json). For details about setting each definition file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

You can set the password in any of the following ways (a sketch follows this list). For details, see the explanation for each item.

  • Secret management command

    For details, see jimasecret in Chapter 1. Commands in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

  • List of Secrets dialog box of integrated operation viewer

    For details, see 2.2.2(4) List of Secrets dialog box in the JP1/Integrated Management 3 - Manager GUI Reference.

  • Secret management REST API of the integrated operation viewer

    For details, see 5.4.3 Initial secret issue in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
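For example, registering the password with the secret management command would look like the following sketch. Note that the key name immgr.proxy_password is an illustrative assumption only; check the jimasecret command reference for the actual key to specify.

    jimasecret -add -key immgr.proxy_password -s "password"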

(f) Change the user of action execution (for Windows) (required)

Change action.username and action.domainname in the imagent configuration file (jpc_imagent.json). For the setup procedure, see 1.21.2(1)(a) Edit the configuration files (for Windows). In addition, the password of the defined user must be registered with the jimasecret command.
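A minimal sketch of the relevant fragment of the imagent configuration file follows; the nesting is inferred from the dotted item names, the values are placeholders, and the password of this user is registered separately with the jimasecret command:

    {
      "action": {
        "username": "action-user-name",
        "domainname": "domain-name"
      }
    }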

(g) Configuring the event-forwarding relay function (for Windows) (optional)

To use the event-forwarding relay function, JP1/IM - Agent must be version 13-01 or later.

Before configuring the JP1/IM - Agent settings, configure the JP1/IM - Manager settings for the event-forwarding relay function. After completing the JP1/IM - Manager settings, configure the JP1/IM - Agent settings, and then configure the JP1/Base settings. Finally, update the tree in JP1/IM - Manager.

For details on the JP1/IM - Manager settings, see Setup event-forwarding relay function (optional) in 1.19.3(1)(b) Change settings of JP1/IM agent management base (for Windows).

The following steps describe how to configure JP1/IM - Agent and JP1/Base:

■ Setup of JP1/IM - Agent

  1. Stop the JP1/IM - Agent service.

    For a cluster configuration, stop the service from the cluster software.

  2. Open the imagent configuration file (jpc_imagent.json) and set the jp1base_forward_recieve items.

    • When JP1/IM - Agent 13-10 or later is newly installed

      The jp1base_forward_recieve items are listed in the imagent configuration file but are commented out like "//jp1base_forward_recieve", so remove the leading "//".

    • When upgrading from a version earlier than JP1/IM - Agent 13-10

      The jp1base_forward_recieve items are not listed in the imagent configuration file, so refer to the imagent configuration file model file when writing them, and remove the leading "//".

  3. Set the port under the jp1base_forward_recieve items in the imagent configuration file (see the sketch after this procedure).

  4. Start JP1/IM - Agent service.

    For a cluster configuration, start the service from the cluster software.
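A schematic sketch of steps 2 and 3 follows. The item name jp1base_forward_recieve is spelled as it appears in the configuration file, but the nested port item and its value are illustrative assumptions only; follow the imagent configuration file model file for the actual items.

    {
      "jp1base_forward_recieve": {
        "port": 20306
      }
    }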

■ Setup of JP1/Base

  1. Stop JP1/Base.

    For a cluster configuration, stop from the cluster software.

  2. Register imagent as a remote-server in the event server configuration file (conf).

    Define the following parameters:

    remote-server event-server-name communication-type address port-designation

    event-server-name

      A fixed value. Specify "imagent".

    communication-type

      A fixed value. Specify "keep-alive".

    address

      Specify the host name (physical host name or logical host name) or IP address of the local host.

      If the host name cannot be resolved by jp1hosts or jp1hosts2, specify the IP address.

    port-designation

      Specify the port number specified in Setup of JP1/IM - Agent.

  3. Set up the forwarding configuration file (forward).

    Define the following (see the sketch after this procedure):

    to imagent
    event-filter
    end-to
  4. Start JP1/Base.

    For a cluster configuration, start from the cluster software.
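A minimal sketch of the forwarding setting in step 3 follows, assuming standard JP1/Base event-filter syntax; the severity-based filter shown is illustrative only, so adjust it to the events you actually want to relay:

    to imagent
    E.SEVERITY IN Error Warning
    end-to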

■ Updating the tree

  1. Refresh the Intelligent Integrated Management Base tree in JP1/IM - Manager.

    If the IM management node of the event-forwarding relay source is not displayed when the tree is refreshed, check the jima_message.log of the relay-source imagent for errors.

(h) Releasing the event-forwarding relay function (for Windows)

Configure the JP1/Base settings first, then configure the JP1/IM - Agent settings. Finally, update the tree in JP1/IM - Manager.

■ Setup of JP1/Base

  1. Stop JP1/Base.

    For a cluster configuration, stop from the cluster software.

  2. Edit the forwarding configuration file (forward) to stop forwarding events to imagent.

  3. Start JP1/Base.

    For a cluster configuration, start from the cluster software.

■ Setup of JP1/IM - Agent

  1. Stop the JP1/IM - Agent service.

    For a cluster configuration, stop the service from the cluster software.

  2. Open the imagent configuration file (jpc_imagent.json) and comment out or remove the jp1base_forward_recieve items.

  3. Start JP1/IM - Agent service.

    For a cluster configuration, start the service from the cluster software.

■ Updating the tree

  1. Refresh the Intelligent Integrated Management Base tree in JP1/IM - Manager.

    If the IM management node of the event-forwarding relay source is not displayed when the tree is refreshed, check the jima_message.log of the relay-source imagent for errors.

(i) Setting the data delivery function to multiple manager hosts (for Windows) (optional)

■ New setting

  1. Stop the JP1/IM - Agent services.

    The service keys of the services to be stopped are as follows:

    • jpc_imagent

    • jpc_imagentaction

    • jpc_imagentproxy

    • jpc_alertmanager

    • jpc_prometheus_server

    • jpc_fluentd

  2. Create folders for the secondary JP1/IM agent management base.

    Create the following folders for the secondary JP1/IM agent management base:

    • Agent-path\tmp\download-imagent-group-identifier

    • Agent-path\tmp\upload-imagent-group-identifier

    • Agent-path\tmp\jbsfwd-imagent-group-identifier

    • Agent-path\tmp\imagent-imagent-group-identifier

    • Agent-path\logs\imagentproxy-imagent-group-identifier

    • Agent-path\logs\imagentaction-imagent-group-identifier

    To create each of the above folders, use the following command:

    mkdir folder-path
  3. Create files for the secondary JP1/IM agent management base.

    Copy each of the following source files in Agent-path\bin and rename it to the corresponding destination file name.

    Copy source: jpc_imagent_service.exe
    Copy destination: jpc_imagent-imagent-group-identifier_service.exe

    Copy source: jpc_imagent_service.xml
    Copy destination: jpc_imagent-imagent-group-identifier_service.xml

    Copy source: jpc_imagentproxy_service.exe
    Copy destination: jpc_imagentproxy-imagent-group-identifier_service.exe

    Copy source: jpc_imagentproxy_service.xml
    Copy destination: jpc_imagentproxy-imagent-group-identifier_service.xml

    Copy source: jpc_imagentaction_service.exe
    Copy destination: jpc_imagentaction-imagent-group-identifier_service.exe

    Copy source: jpc_imagentaction_service.xml
    Copy destination: jpc_imagentaction-imagent-group-identifier_service.xml

  4. Configure settings for the secondary JP1/IM agent control base.

    See 3.15.7(5)(d) configuration file of JP1/IM agent control base in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

  5. Register a secret for the secondary JP1/IM agent control base.

    Obtain and register the initial secret of the JP1/IM - Manager to which the secondary JP1/IM agent control base connects.

    If you connect to JP1/IM - Manager through an HTTP proxy server that requires password-based HTTP authentication (Basic authentication), register the HTTP proxy password as a secret.

    For the keys of the initial secret and the HTTP proxy password to register, see 3.15.7(5)(e) Keys for initial secret, client secret, and HTTP Proxy Passwords in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

  6. Configure settings for programs other than the JP1/IM agent control base.

    See 3.15.7(5)(f) Send settings for programs other than JP1/IM agent control base in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

  7. Configure the service startup settings.

    See 3.15.7(5)(c) Configuring Secondary Service Startup in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

  8. Run the following command to enable the service:

    Agent-path\tools\jpc_service -on service-key

    For the service key, use the service key of JP1/IM - Agent for Windows with -imagent-group-identifier appended, as described in 10.1 JP1/IM - Agent Service in the JP1/Integrated Management 3 - Manager Administration Guide.

  9. Start the JP1/IM - Agent services.

    The service keys of the services to be started are as follows:

    • jpc_imagent

    • jpc_imagentaction

    • jpc_imagentproxy

    • jpc_imagent-imagent-group-identifier

    • jpc_imagentaction-imagent-group-identifier

    • jpc_imagentproxy-imagent-group-identifier

    • jpc_alertmanager

    • jpc_prometheus_server

    • jpc_fluentd

  10. Set automatic start at OS startup.

    For details about enabling automatic start, see 1.21.1(2)(a) Enable for Auto-start.

■ Setting at version upgrade

If you are upgrading from a version in which the data delivery function to multiple manager hosts is enabled, perform the steps below.

Perform the following procedure after step 5 (and before step 6) of 1.3.1(3)(b) Install instructions at the time of version upgrade installation.

  1. Create files for the secondary JP1/IM agent management base.

    Copy each of the following source files in Agent-path\bin and rename it to the corresponding destination file name.

    Copy source: jpc_imagent_service.exe
    Copy destination: jpc_imagent-imagent-group-identifier_service.exe

    Copy source: jpc_imagentproxy_service.exe
    Copy destination: jpc_imagentproxy-imagent-group-identifier_service.exe

    Copy source: jpc_imagentaction_service.exe
    Copy destination: jpc_imagentaction-imagent-group-identifier_service.exe

(j) Deactivating the data delivery function to multiple manager hosts (for Windows) (optional)

The following describes how to remove the configuration for data delivery to multiple manager hosts when the function is no longer used.

Perform one of the following, depending on your operating environment:

  • Remove only the data delivery function to multiple manager hosts without uninstalling JP1/IM - Agent

  • Uninstall JP1/IM - Agent itself

■ To remove only the data delivery function to multiple manager hosts without uninstalling JP1/IM - Agent

  1. Stop all JP1/IM - Agent services.

    If any service is running, execute the following command to stop it:

    Agent-path\tools\jpc_service_stop -s all
  2. Disable the secondary services.

    Disable the secondary services of the JP1/IM agent control base by running the following command:

    Agent-path\tools\jpc_service -off service-key

    For details on how to disable a service, see step 2 and step 4 in 1.21.1(1)(b) Disable add-on program.

    For the service keys, use the service keys of JP1/IM - Agent for Windows with -imagent-group-identifier appended, as described in 10.1 JP1/IM - Agent Service in the JP1/Integrated Management 3 - Manager Administration Guide.

  3. Delete the secrets that you created to use the data delivery function to multiple manager hosts.

    Remove the secret for the secondary JP1/IM agent control base that you registered in step 5 of 1.21.2(2)(i) Setting the data delivery function to multiple manager hosts (for Windows) (optional).

    For the keys of the secrets to be deleted, see 3.15.7(5)(e) Keys for initial secret, client secret, and HTTP Proxy Passwords in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

    This step is not necessary for secrets that have already been deleted.

  4. Undo the settings for using the data delivery function to multiple manager hosts.

    Restore the settings for the secondary JP1/IM agent control base that were made in step 4 of 1.21.2(2)(i) Setting the data delivery function to multiple manager hosts (for Windows) (optional).

  5. Delete the files copied to use the data delivery function to multiple manager hosts.

    Remove the files for the secondary JP1/IM agent management base that you copied in step 3 of 1.21.2(2)(i) Setting the data delivery function to multiple manager hosts (for Windows) (optional).

  6. Delete the folders created to use the data delivery function to multiple manager hosts.

    After saving the log and other required files, delete the folders for the secondary JP1/IM agent management base created in step 2 of 1.21.2(2)(i) Setting the data delivery function to multiple manager hosts (for Windows) (optional).

  7. Delete the integrated agents registered on the JP1/IM - Manager to which the disabled secondary services were connected.

    For details on deleting integrated agent data, see 2.2.1 List of Integrated Agents window in the JP1/Integrated Management 3 - Manager GUI Reference.

■ When uninstalling JP1/IM - Agent

Perform steps 3 to 6 of To remove only the data delivery function to multiple manager hosts without uninstalling JP1/IM - Agent above between steps 1 and 2 of 1.25.1(5) How to uninstall JP1/IM - Agent.

Then perform step 7 of To remove only the data delivery function to multiple manager hosts without uninstalling JP1/IM - Agent above.

(3) Setup of Prometheus server

(a) Changing Ports (For Windows) (optional)

The listen port used by the Prometheus server is specified in the --web.listen-address option of the prometheus command.

For details about how to change prometheus command options, see 1.21.2(1)(c) Change command-line options (for Windows). For details of --web.listen-address option, see prometheus command options in Service definition file (jpc_program-name_service.xml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is "20713". If port number is changed, review setup of the firewall and prohibit accessing from outside. However, if you want to monitor Prometheus server with external shape monitoring by Blackbox exporter in other host, allow it to be accessed. In such cases, consider security measures such as limiting the source IP address as required.

(b) Add alert definitions (for Windows) (optional)

Alert definitions are defined in the alert configuration file (jpc_alerting_rules.yml).

For details on how to edit alert configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

For details about the items that require setup in the alert definition, see Alert rule definition for converting to JP1 events in 3.15.1(3)(a) Alert evaluation function in the JP1/Integrated Management 3 - Manager Overview and System Design Guide. For details of the individual items and sample for the alert definitions, see Alert configuration file (jpc_alerting_rules.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

Note

The following are points to consider when you create an alert definition (a schematic alert definition is sketched after this note):

  • The performance monitoring function of JP1/IM - Agent allows you to specify a duration (for). If the alert condition is met continuously for the specified period, the alert is judged to be firing.

  • If you want to detect that a metric is missing, use the absent() function in the alert condition.

    absent(metric{label})

  • If you want the alert to be evaluated only during certain hours, express this in the PromQL of the alert condition.

    (Example) When monitoring from 8:00 to 12:00 Japan time

    alert-condition and ON() (23 <= hour() or hour() < 3)

    Note that hour() returns the hour in UTC, so you must convert from local time to UTC.

  • The performance monitoring function of JP1/IM - Agent notifies you of firing and resolved states. If you want to be notified in two stages (warning and error), create separate alerts for warning and for error.

  • The message displayed when an alert occurs can include the following:

    - Message at firing

    - Message at resolution

  • For details about the variables that can be embedded in alert messages, see 3.15.1(3)(a) Alert evaluation function in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.
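As a reference, a schematic alert definition in the standard Prometheus rule format is sketched below. It assumes the windows_cpu_time_total metric collected by Windows exporter; the label and annotation shown are illustrative, and the items that JP1 actually evaluates when converting alerts to JP1 events are described in Alert configuration file (jpc_alerting_rules.yml) in the reference manual.

    groups:
      - name: sample_alerts
        rules:
          - alert: WindowsCpuWarning
            # Fires when CPU usage stays above 90% for 5 minutes
            expr: 100 - (avg by (instance) (rate(windows_cpu_time_total{mode="idle"}[5m])) * 100) > 90
            for: 5m
            labels:
              severity: warning
            annotations:
              summary: CPU usage has exceeded 90% for 5 minutes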

Important

Define alert definitions so that the total number of time series evaluated by all alert definitions is less than 150.

For example, if 3 alerts are defined for Windows network interfaces and there are 4 network interfaces on the Windows host, Windows exporter generates 4 time series.

These 4 time series are each evaluated by the 3 alerts, so the total number of evaluated time series is 4 × 3 = 12.

(c) Add Blackbox exporter scrape job (for Windows) (optional)

Before adding a Blackbox exporter scrape job, you must add the module to the Blackbox exporter configuration file. For details, see 1.21.2(6)(b) Add, change, and delete modules (for Windows) (optional).

After you add the module, perform the following steps to set up a scrape job that scrapes the newly created module:

  1. Create a discovery configuration file for your Blackbox exporter.

    Copy the model file shown below and rename it to the destination definition file name to create a discovery configuration file for Blackbox exporter (a sketch of the file contents appears after this procedure).

    - When performing HTTP/HTTPS monitoring

    • For Windows:

      Copy source: Agent-path\conf\jpc_file_sd_config_blackbox_http.yml.model

      Copy to: Agent-path\conf\file_sd_config_blackbox_module-name-begins-with-http.yml

    • For Linux:

      Copy source: /opt/jp1ima/conf/jpc_file_sd_config_blackbox_http.yml.model

      Copy to: /opt/jp1ima/conf/file_sd_config_blackbox_module-name-begins-with-http.yml

    - When performing ICMP monitoring

    • For Windows:

      Copy source: Agent-path\conf\jpc_file_sd_config_blackbox_icmp.yml.model

      Copy to: Agent-path\conf\file_sd_config_blackbox_module-name-begins-with-icmp.yml

    • For Linux:

      Copy source: /opt/jp1ima/conf/jpc_file_sd_config_blackbox_icmp.yml.model

      Copy to: /opt/jp1ima/conf/file_sd_config_blackbox_module-name-begins-with-icmp.yml

    The module name is the name of the module that was added in 1.21.2(6)(b) Add, change, and delete modules (for Windows) (optional).

  2. Edit the Blackbox exporter discovery configuration file.

    • Discovery configuration file for HTTP/HTTPS monitoring

    For descriptions, see Blackbox exporter (HTTP/HTTPS monitoring) discovery configuration file (jpc_file_sd_config_blackbox_http.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    • Discovery configuration file for ICMP monitoring

    For descriptions, see Blackbox exporter (ICMP monitoring) discovery configuration file (jpc_file_sd_config_blackbox_icmp.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

  3. Use the integrated operation viewer to add the definition file.

    For instructions on how to add a definition file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  4. Add a scrape job to the Prometheus configuration file.

    • When performing HTTP/HTTPS monitoring

    In the Prometheus configuration file (jpc_prometheus_server.yml), copy the definition of the scrape job with the job name "jpc_blackbox_http" to add a new scrape job.

    • When performing ICMP monitoring

    In the Prometheus configuration file (jpc_prometheus_server.yml), copy the definition of the scrape job with the job name "jpc_blackbox_icmp" to add a new scrape job.

    <Sample Setup>

    scrape_configs:
      - job_name: any-scrape-job-name
        metrics_path: /probe
        params:
          module: [module-name]
        file_sd_configs:
          - files:
            - 'discovery-configuration-file-name'
        relabel_configs:
          (Omitted)
        metric_relabel_configs:
          (Omitted)
    any-scrape-job-name

    Specify any name that does not overlap with any other scrape job name, in the range of 1 to 255 characters, except for control characters.

    module-name

    Specify the module name that was added in 1.21.2(6)(b) Add, change, and delete modules (for Windows) (optional).

    discovery-configuration-file-name

    Specify the file that you created in step 1.

    For descriptions of Prometheus configuration file, see <scrape_config> in Prometheus configuration file (jpc_prometheus_server.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details about editing Prometheus configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).
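For reference, the discovery configuration file edited in step 2 generally lists the monitoring targets in the standard Prometheus file_sd format, as in the following sketch; the URL is a placeholder, and the JP1-specific labels required in this file are described in the reference manual noted in step 2:

    - targets:
      - 'https://www.example.com/'
      labels:
        (Omitted)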

(d) Add user-defined Exporter scrape job (for Windows) (optional)

To scrape a user-defined Exporter, you need the following setup:

  • Adding a user-specific discovery configuration file

  • Editing the Prometheus configuration file (jpc_prometheus_server.yml)

For details about how to add and edit each definition file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  1. Add a user-specific discovery configuration file.

    Specify the user-defined Exporter that you want to scrape in the user-specific discovery configuration file.

    For descriptions, see User-specific discovery configuration file (user_file_sd_config_any-name.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

  2. Add a scrape job to the Prometheus configuration file.

    In the Prometheus configuration file (jpc_prometheus_server.yml), add a scrape job that scrapes the user-defined Exporter.

    Scrape jobs are listed under scrape_configs.

    <Sample Setup>

    scrape_configs:
      - job_name: scrape-job-name
     
        file_sd_configs:
          - files:
            - discovery-configuration-file-name
     
        relabel_configs:
          - target_label: jp1_pc_nodelabel
            replacement: label-name-of-IM-management-node
     
        metric_relabel_configs:
          - source_labels: ['__name__']
            regex: 'metric-1|metric-2|metric-3'
            action: 'keep'
    scrape-job-name

    Specify an arbitrary string. This value is set in the job label of the metrics.

    discovery-configuration-file-name

    Specify the user-specific discovery configuration file created in step 1 above.

    label-name-of-IM-management-node

    Specify the character string that the integrated operation viewer displays as the IM management node label. Control characters cannot be used.

    The URL-encoded string must be within 234 bytes (if all characters are multibyte, the limit is 26 characters). A label name that overlaps with the label names of IM management nodes created by JP1/IM - Agent cannot be specified. Specify a character string that is not already used on the same host; if an already-used string is specified, the configuration information SIDs become identical and the IM management nodes are not created properly.

    metric-1, metric-2, metric-3

    Specify the metrics that you want to collect. If there is more than one metric to collect, separate them with |.

    If you want to collect all metrics, you do not need to include "metric_relabel_configs". However, if a large number of metrics are present, the amount of data will be large; therefore, we recommend that you specify "metric_relabel_configs" and limit collection to the metrics to be monitored.

  3. Add a metric definition file.

    Add a metric definition file for the user-defined Exporter.

    For descriptions, see User-specific metric definition file (metrics_any-Prometheus-trend-name.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(e) Changing Remote Write destination (for Windows) (optional)

The Remote Write destination is specified in remote_write.url of the Prometheus configuration file (jpc_prometheus_server.yml) as the URL and port of the imagentproxy process running on the same host. You need to change it only if you change the imagentproxy process port.

<Sample Setup>

remote_write:
  - url: http://localhost:20727/ima/api/v1/proxy/service/promscale/write

For instructions on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

(f) Configuring service monitoring settings (For Windows) (optional)

To use the service monitoring function in an environment upgraded from JP1/IM - Agent 13-00 to 13-01 or later, configure the following settings. These settings are not required if JP1/IM - Agent 13-01 or later was newly installed.

■ Editing Prometheus configuration file (jpc_prometheus_server.yml)

If "windows_service_state" is not set to keep metric in the metric_relabel_configs settings of the jpc_windows scrape job, add the settings. Also, if the same metric_relabel_configs setting does not set a relabel config for "windows_service_.*" metric, add the setting. Add the underlined settings as follows:

(Omitted)
scrape_configs:
(Omitted)
  - job_name: 'jpc_windows'
(Omitted)
    metric_relabel_configs:
      - source_labels: ['__name__']
        regex: 'windows_cs_physical_memory_bytes|windows_cache_copy_read_hits_total|(Omitted)|windows_process_working_set_peak_bytes|windows_process_working_set_bytes|windows_service_state'
        action: 'keep'
      - source_labels: ['__name__']
        regex: 'windows_process_.*'
        target_label: 'jp1_pc_trendname'
        replacement: 'windows_exporter_process'
      - source_labels: ['__name__','process']
        regex: 'windows_process_.*;(.*)'
        target_label: 'jp1_pc_nodelabel'
        replacement: ${1}
      - source_labels: ['__name__']
        regex: 'windows_service_.*'
        target_label: 'jp1_pc_trendname'
        replacement: 'windows_exporter_service'
      - source_labels: ['__name__']
        regex: 'windows_service_.*'
        target_label: 'jp1_pc_category'
        replacement: 'service'
      - source_labels: ['__name__','name']
        regex: 'windows_service_.*;(.*)'
        target_label: 'jp1_pc_nodelabel'
        replacement: ${1}
      - regex: jp1_pc_multiple_node
        action: labeldrop

■ Editing Windows exporter discovery configuration file (jpc_file_sd_config_windows.yml)

If "windows_service_state" is not set in the jp1_pc_multiple_node settings, add the underlined settings as shown below.

- targets:
  - host-name:20717
  labels:
    jp1_pc_exporter: JPC Windows exporter
    jp1_pc_category: platform
    jp1_pc_trendname: windows_exporter
    jp1_pc_multiple_node: "{__name__=~'windows_process_.*|windows_service_.*'}"

(g) Configure the settings when the label name (jp1_pc_nodelabel value) of the IM management node exceeds the upper limit (for Windows) (optional)

The label name (jp1_pc_nodelabel value) of an IM management node can be up to 234 bytes of URL-encoded text (if all characters are multibyte, the limit is 26 characters). If the limit is exceeded, you must change the jp1_pc_nodelabel value in metric_relabel_configs of the Prometheus configuration file (jpc_prometheus_server.yml). If you do not change the value, no IM management node is created for that value when IM management nodes are created.

If you want to change the value, add the following settings to the metric_relabel_configs settings of the relevant scrape job in the Prometheus configuration file (jpc_prometheus_server.yml). When changing the value for multiple targets, add the setting once for each monitored target.

■ Editing Prometheus configuration file (jpc_prometheus_server.yml)

(Omitted)
scrape_configs:
(Omitted)
  - job_name: 'scrape-job-name'
(Omitted)
    metric_relabel_configs:
(Omitted)
      - source_labels: ['jp1_pc_nodelabel']
        regex: 'regular-expression-to-match-the-value-before-the-jp1_pc_nodelabel-change'
        target_label: 'jp1_pc_nodelabel'
        replacement: 'value-after-jp1_pc_nodelabel-change'

(h) Setting for executing the SAP system log extract command using Script exporter (for Windows) (optional)

Perform the following steps:

  1. Edit the scrape definition in the Prometheus configuration file.

    To execute the SAP system log extract command using the http_sd_config method of Script exporter, change the scrape definition of Script exporter as shown below.

    • Editing Prometheus configuration file (jpc_prometheus_server.yml)

    (Omitted)
    scrape_configs:
    (Omitted)
     
      - job_name:  'jpc_script'
     
        http_sd_configs:
          - url: 'http://installation-host-name:port/discovery'
    (Omitted)
     
        metric_relabel_configs:
          - source_labels: ['__name__']
            regex: 'script_success|script_duration_seconds|script_exit_code'
            action: 'keep'
          - source_labels: [jp1_pc_script]
            target_label: jp1_pc_nodelabel
          - source_labels: [jp1_pc_script]
            regex: '.*jr3slget.*|.*jr3alget.*'
            target_label: 'jp1_pc_category'
            replacement: 'enterprise'
          - regex: (jp1_pc_script|jp1_pc_multiple_node|jp1_pc_agent_create_flag)
            action: labeldrop

(i) Add a Web exporter scrape job (for Windows) (optional)

  1. Add the default scrape job to the Prometheus configuration file.

    If JP1/IM - Agent 13-10 or later was newly installed, this step is not necessary.

    If you upgraded JP1/IM - Agent from a version earlier than 13-10 to 13-10 or later, the Prometheus configuration file model file stored in JP1/IM-Agent-installation-destination-folder\jp1ima\conf has been updated. Add the following content of the Prometheus configuration file model file (jpc_prometheus_server.yml.model) to scrape_configs in the Prometheus configuration file (jpc_prometheus_server.yml).

    For logical host operation, update shared-folder\jp1ima\conf\jpc_prometheus_server.yml in the same way.

    <What to append>

    scrape_configs:
      - job_name: 'jpc_web_probe'
        scrape_interval: 6m
        scrape_timeout: 5m
        metrics_path: /probe
        file_sd_configs:
          - files:
            - 'jpc_file_sd_config_web.yml'
        relabel_configs:
          (Omitted)
        metric_relabel_configs:
          (Omitted)
  2. Add a Web exporter discovery configuration file (optional).

    For Web scenario monitoring, when you add a new scrape job, create a new discovery configuration file.

    Copy the following Web exporter discovery configuration file model file as a template and rename it to the destination definition file name to create the discovery configuration file.

    Copy source: Agent-path\conf\jpc_file_sd_config_web.yml.model

    Copy destination: Agent-path\conf\user\file_sd_config_web_any-name.yml

    For details about the Web exporter discovery configuration file format, see Web exporter discovery configuration file (jpc_file_sd_config_web.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details about editing the Web exporter discovery configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  3. Add a new scrape job to the Prometheus configuration file (optional).

    For Web scenario monitoring, when you add a new scrape job, add it to the Prometheus configuration file (jpc_prometheus_server.yml) by copying the scrape job definition with the job name "jpc_web_probe".

    <Setting example>

    scrape_configs:
      - job_name: any-scrape-job-name
        scrape_interval: scrape-request-interval
        scrape_timeout: timeout-period-for-scrape-request
        metrics_path: /probe
        file_sd_configs:
          - files:
            - 'user/discovery-configuration-file-name'
        relabel_configs:
          (Omitted)
        metric_relabel_configs:
          (Omitted)
    any-scrape-job-name

    Specify any name that does not overlap with any other scrape job name, in the range of 1 to 255 characters, except for control characters.

    For details on Prometheus configuration file, see <scrape_config> in Prometheus configuration file (jpc_prometheus_server.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details about editing Prometheus configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

(j) Add a VMware exporter scrape job (for Windows only) (optional)

  1. Add the default scrape job to the Prometheus configuration file.

    If JP1/IM - Agent 13-10 or later was newly installed, this step is not necessary.

    If you upgraded JP1/IM - Agent from a version earlier than 13-10 to 13-10 or later, the Prometheus configuration file model file stored in JP1/IM-Agent-installation-destination-folder\jp1ima\conf has been updated. Add the following content of the Prometheus configuration file model file (jpc_prometheus_server.yml.model) to scrape_configs in the Prometheus configuration file (jpc_prometheus_server.yml).

    For logical host operation, update shared-folder\jp1ima\conf\jpc_prometheus_server.yml in the same way.

    <What to append>

    scrape_configs:
      - job_name: 'jpc_vmware'
        params:
          section: ['section']
        file_sd_configs:
          - files:
            - 'jpc_file_sd_config_vmware.yml'
        relabel_configs:
          (Omitted)
        metric_relabel_configs:
          - source_labels: [vm_name]
            regex: '(.*)'
            target_label: instance
            action: replace
          - source_labels: ['ds_name']
            regex: '(.*)'
            target_label: instance
            replacement: ${1}
          - source_labels: ['ds_name']
            regex: '(.*)_(.*)'
            target_label: instance
            replacement: ${1}
          - source_labels: ['vm_name','__name__']
            regex: '(.*);vmware_vm_.*'
            target_label: instance
            replacement: ${1}
            action: replace
          - source_labels: ['host_name','__name__']
            regex: '(.*);vmware_host_.*'
            target_label: instance
            replacement: ${1}
            action: replace
          - source_labels: ['__name__']
            regex: 'vmware_host_.*|vmware_datastore_.*'
            target_label: jp1_pc_trendname
            replacement: vmware_exporter_host
          - source_labels: ['__name__']
            regex: 'vmware_vm_.*'
            target_label: jp1_pc_trendname
            replacement: vmware_exporter_vm
          (Omitted)
  2. Add a scrape job to the Prometheus configuration file.

    Before adding a VMware exporter scrape job, you must add a section to the VMware exporter configuration file. For details, see 1.21.2(14)(b) Add, change, or remove sections (for Windows) (mandatory).

    After adding the section, perform the following steps to set up a scrape job that scrapes the newly created section.

    If you want to monitor VMware ESXi, add a new scrape job to the Prometheus configuration file (jpc_prometheus_server.yml) by copying the scrape job definition with the job name "jpc_vmware".

    Also, if you want to monitor an additional new section, add a new scrape job in the same way, by copying the scrape job definition with the job name "jpc_vmware". In this case, the section name in params is the section name set in the VMware exporter configuration file (jpc_vmware_exporter.yml).

    <Setting example>

    scrape_configs:
      - job_name: any-scrape-job-name
        params:
          section: ['section-name']
        file_sd_configs:
          - files:
            - 'jpc_file_sd_config_vmware.yml'
        relabel_configs:
          (Omitted)
        metric_relabel_configs:
          - source_labels: [vm_name]
            regex: '(.*)'
            target_label: instance
            action: replace
          - source_labels: ['ds_name']
            regex: '(.*)'
            target_label: instance
            replacement: ${1}
          - source_labels: ['ds_name']
            regex: '(.*)_(.*)'
            target_label: instance
            replacement: ${1}
          - source_labels: ['vm_name','__name__']
            regex: '(.*);vmware_vm_.*'
            target_label: instance
            replacement: ${1}
            action: replace
          - source_labels: ['host_name','__name__']
            regex: '(.*);vmware_host_.*'
            target_label: instance
            replacement: ${1}
            action: replace
          - source_labels: ['__name__']
            regex: 'vmware_host_.*|vmware_datastore_.*'
            target_label: jp1_pc_trendname
            replacement: vmware_exporter_host
          - source_labels: ['__name__']
            regex: 'vmware_vm_.*'
            target_label: jp1_pc_trendname
            replacement: vmware_exporter_vm
          (Omitted)
    any-scrape-job-name

    Keep the leading "jpc_vmware" unchanged, append an underscore (_), and then specify any name that does not overlap with any other scrape job name, in the range of 1 to 255 characters, except for control characters.

    section-name

    Specify the section name added in 1.21.2(14)(b) Add, change, or remove sections (for Windows) (mandatory).

    Enclose the section name in single quotation marks (') or double quotation marks (").

    For details on Prometheus configuration file, see <scrape_config> in Prometheus configuration file (jpc_prometheus_server.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details about editing Prometheus configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

(k) Add a Windows exporter (Hyper-V monitoring) scrape job (for Windows only) (optional)

  1. Add the default scrape job to the Prometheus configuration file.

    If JP1/IM - Agent 13-11 or later was newly installed, this step is not necessary.

    If you upgraded JP1/IM - Agent from a version earlier than 13-11 to 13-11 or later, the Prometheus configuration file model file stored in JP1/IM-Agent-installation-destination-folder\jp1ima\conf has been updated. Add the following content of the Prometheus configuration file model file (jpc_prometheus_server.yml.model) to scrape_configs in the Prometheus configuration file (jpc_prometheus_server.yml).

    For logical host operation, update shared-folder\jp1ima\conf\jpc_prometheus_server.yml in the same way.

    <What to append>

    scrape_configs:
      - job_name: 'jpc_hyperv'
        
        file_sd_configs:
          - files:
            - 'jpc_file_sd_config_windows_hyperv.yml'
            
        metric_relabel_configs:
          - source_labels: ['__name__']
            regex: 'windows_hyperv_vm_cpu_total_run_time|windows_hyperv_vm_device_bytes_written|windows_hyperv_vm_device_bytes_read|windows_hyperv_host_cpu_total_run_time|windows_hyperv_vswitch_bytes_received_total|windows_hyperv_vswitch_bytes_sent_total|windows_cs_logical_processors|windows_hyperv_vm_cpu_hypervisor_run_time'
            action: 'keep'
          - source_labels: ['__name__']
            regex: 'windows_hyperv_host_.*|windows_hyperv_vswitch_.*|windows_hyperv_vm_cpu_.*|windows_cs_.*'
            target_label: jp1_pc_nodelabel
            replacement: Hyperv metric collector(Hypervisor)
          - source_labels: ['__name__','instance']
            regex: 'windows_hyperv_vm_device_.*;([^:]+):?.*'
            target_label: jp1_pc_nodelabel
            replacement: 'Hyperv metric collector(VM) Hypervisor:${1}'
          - source_labels: ['__name__','instance']
            regex: 'windows_hyperv_vm_cpu_total_run_time;([^:]+):?.*'
            target_label: jp1_pc_nodelabel
            replacement: 'Hyperv metric collector(VM) Hypervisor:${1}'
          - source_labels: ['vm','__name__']
            regex: '(.*);windows_hyperv_vm_cpu_total_run_time'
            target_label: instance
            replacement: ${1}
            action: replace
          - source_labels: ['vm_device']
            regex: '^(.*-)([^-_]*)(?:_[^.]*)?\.[^.]*$'
            target_label: instance
            replacement: ${2}
            action: replace
          - source_labels: ['__name__']
            regex: 'windows_hyperv_host_.*|windows_hyperv_vswitch_.*|windows_hyperv_vm_cpu_hypervisor_.*|windows_cs_.*'
            target_label: jp1_pc_trendname
            replacement: windows_exporter_hyperv_host
          - source_labels: ['__name__']
            regex: 'windows_hyperv_host_.*|windows_hyperv_vswitch_.*|windows_hyperv_vm_cpu_hypervisor_.*|windows_cs_.*'
            target_label: jp1_pc_category
            replacement: VirtualMachine Host
          - source_labels: ['__name__']
            regex: 'windows_hyperv_vm_device_.*|windows_hyperv_vm_cpu_total_run_time'
            target_label: jp1_pc_trendname
            replacement: windows_exporter_hyperv_vm
          - source_labels: ['__name__']
            regex: 'windows_hyperv_vm_device_.*|windows_hyperv_vm_cpu_total_run_time'
            target_label: 'jp1_pc_remote_monitor_instance'
            replacement: 'installation-host-name:Hyperv metric collector(VM)'
            action: 'replace'

(4) Setup of Alertmanager

(a) Changing Ports (For Windows) (optional)

The listen port used by Alertmanager is specified in the --web.listen-address option of the alertmanager command.

For details about how to change alertmanager command options, see 1.21.2(1)(c) Change command-line options (for Windows). For details of --web.listen-address option, see alertmanager command options in Service definition file (jpc_program-name_service.xml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is "20714". If port number is changed, review setup of the firewall and prohibit accessing from outside. However, if you want to monitor Alertmanager with external shape monitoring by Blackbox exporter in other host, allow it to be accessed. In such cases, consider security measures such as limiting the source IP address as required.

(b) Changing the alert notification destination (for Windows) (optional)

The alert notification destination is specified in receivers.webhook_config.url of the Alertmanager configuration file (jpc_alertmanager.yml) as the URL and port of the imagentproxy process running on the same host. You need to change it only if you change the imagentproxy process port.

For instructions on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

(c) Setup silence (for Windows) (optional)

Execute the command from JP1/IM - Manager against the host where the Alertmanager for which you want to set up a silence is running. Use the curl command to call the REST API that sets up a silence.

For the REST API for setting up a silence, see 5.22.4 Silence creation of Alertmanager in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The silence settings to be specified in the message body of the request are passed as arguments to the curl command (a sketch follows).
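A sketch of such a call follows, assuming the standard Alertmanager v2 silences endpoint and the default port described in (a); the matcher values, times, and host name are placeholders, and the exact URL should be confirmed against 5.22.4 in the reference manual:

    curl -X POST -H "Content-Type: application/json" -d '{"matchers":[{"name":"alertname","value":"alert-name","isRegex":false}],"startsAt":"2025-01-01T00:00:00Z","endsAt":"2025-01-01T02:00:00Z","createdBy":"jp1admin","comment":"maintenance"}' http://Alertmanager-host:20714/api/v2/silences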

(5) Setup of Windows exporter

(a) Change the port (optional)

The listen port used by Windows exporter is specified in the --telemetry.addr option of the windows_exporter command.

For details about how to change windows_exporter command options, see 1.21.2(1)(c) Change command-line options (for Windows). For details of --telemetry.addr option, see windows_exporter command options in Service definition file (jpc_program-name_service.xml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is "20717". If port number is changed, review setup of the firewall and prohibit accessing from outside.

(b) Modify the metrics to collect (optional)

  1. Add the metrics to the Prometheus configuration file.

    In metric_relabel_configs of the Prometheus configuration file (jpc_prometheus_server.yml), the metrics to be collected are defined separated by "|". Delete the metrics that you do not need to collect, and add the metrics that you want to collect.

    For instructions on updating the configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

    <Sample Setup>

      - job_name: 'jpc_windows'
        :
        metric_relabel_configs:
          - source_labels: ['__name__']
            regex: 'windows_cache_copy_read_hits_total|windows_cache_copy_reads_total|windows_cpu_time_total|windows_logical_disk_free_bytes|windows_logical_disk_idle_seconds_total|windows_logical_disk_read_bytes_total|....|windows_net_packets_sent_total|windows_net_packets_received_total|windows_system_context_switches_total|windows_system_processor_queue_length|windows_system_system_calls_total|[Add metrics here]'
  2. If required, define a trend view in the metric definition files.

    Define trend views in the Windows exporter metric definition file and the Windows exporter (process monitoring) metric definition file.

    For descriptions, see Windows exporter metric definition file (metrics_windows_exporter.conf) and Windows exporter (process monitoring) metric definition file (metrics_windows_exporter_process.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

  3. Configure the service monitoring settings.

    - Editing Windows exporter configuration file (jpc_windows_exporter.yml)

    When performing service monitoring, edit the Windows exporter configuration file (jpc_windows_exporter.yml) as shown below.

    collectors:
      enabled: cache,cpu,logical_disk,memory,net,system,cs,process,service
    collector:
      logical_disk:
        volume-whitelist: ".+"
        volume-blacklist: ""
      net:
        nic-whitelist: ".+"
        nic-blacklist: ""
      process:
        whitelist: ""
        blacklist: ""
      service:
        services-where: "WQL's-Where-phrase"
    scrape:
      timeout-margin: 0.5

    If "service" is not set for "enabled" of "collectors", add the "service" setting.

    If "service" of "collector" is not set, add "service" and "services-where" lines. The value of "services-where" is Where phrase of WQL and sets the service name of the service to be monitored in the format "Name='service-name'". If the service name is set to exact match and you want to monitor more than one service, connect them with a OR and set them in the format "Name='service-name' OR Name='service-name' OR ...".

    - Sample definition of Windows exporter configuration file (jpc_windows_exporter.yml)

    The following is a sample definition for monitoring the jpc_imagent and jpc_imagentproxy services:

    collectors:
      enabled: cache,cpu,logical_disk,memory,net,system,cs,process,service
    collector:
      logical_disk:
        volume-whitelist: ".+"
        volume-blacklist: ""
      net:
        nic-whitelist: ".+"
        nic-blacklist: ""
      process:
        whitelist: ""
        blacklist: ""
      service:
        services-where: "Name='jpc_imagent' OR Name='jpc_imagentproxy'"
    scrape:
      timeout-margin: 0.5

(c) Specifying monitored processes (required)

- Editing the Windows exporter configuration file (jpc_windows_exporter.yml)

Edit the Windows exporter configuration file (jpc_windows_exporter.yml) to define which processes are monitored.

By default, no processes are monitored, so specify the processes you want to monitor in the Windows exporter configuration file (a sketch follows this subsection).

For details on the Windows exporter configuration file, see Windows exporter configuration file (jpc_windows_exporter.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
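A minimal sketch follows, reusing the collector.process keys shown in the sample in (b); the regular expression is a placeholder that should match the names of the processes you want to monitor:

    collector:
      process:
        whitelist: "jpc_.*"
        blacklist: ""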

(6) Setup of Blackbox exporter

(a) Changing Ports (For Windows) (optional)

The listen port used by Blackbox exporter is specified in the --web.listen-address option of the blackbox_exporter command.

For details about how to change blackbox_exporter command options, see 1.21.2(1)(c) Change command-line options (for Windows). For details of --web.listen-address option, see blackbox_exporter command options in Service definition file (jpc_program-name_service.xml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is "20715". If port number is changed, review setup of the firewall and prohibit accessing from outside.

(b) Add, change, and delete modules (for Windows) (optional)

For each target (host or URL), the monitoring method, such as the protocol and authentication data, must be defined in Blackbox exporter.

The following modules are defined in the default setup:

Table 1‒14: Modules defined in the initial setup

Module name: http

  • Monitors HTTP/HTTPS.

  • The method is "GET", and no headers are set.

  • Client authentication, server authentication, and HTTP authentication (Basic authentication) are not performed.

  • When the HTTP/HTTPS URL is accessed and a status code in the range 200-299 is returned, 1 is set to the probe_success metric.

  • If communication with the URL is not possible, or if the status code is not in the range 200-299, 0 is set to the metric.

  • If the target redirects, the result depends on the status code of the redirect destination.

Module name: icmp

  • Monitors ICMP.

  • Authentication is not performed.

  • If ICMP communication with the monitored host or IP address succeeds, 1 is set to the metric. If communication is not possible, 0 is set.

If monitoring is possible with the modules of the default setup, there is no need to define new ones. If there are requirements that the modules of the initial setup cannot cover, as shown below, a module definition must be added:

  • When authentication is required

  • When the judgment must be changed based on the content of the response

Modules are defined in Blackbox exporter configuration file. For descriptions, see Blackbox exporter configuration file (jpc_blackbox_exporter.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The following shows description rules:

  • Set the name of a new module as follows:

    • For HTTP/HTTPS monitoring

      Set a name starting with http.

    • For ICMP monitoring

      Set a name starting with icmp.

  • If you are creating a module that uses client authentication, server authentication, or HTTP authentication (Basic authentication), you need a certificate and a password setting.

    For the location of the certificate, see the list of files and directories in Appendix A.4 JP1/IM - Agent in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

    For details about how to set the HTTP authentication (Basic authentication) password, see 1.21.2(6)(e) Setup the proxy authentication ID and Password (for Windows) (optional) and 1.21.2(6)(f) Setup authentication ID, Password, and Bearer tokens for accessing the monitored Web Server (for Windows) (optional).

    Table 1‒15: Monitoring requirements and required setup

    Monitoring condition: Server authentication

    Required file: Place the CA certificate of the certificate authority that issued the server certificate of the target in Agent-path\conf\user\cert.

    Required setup: Set the following in tls_config of the Blackbox exporter configuration file:

    • Set ca_file to the CA certificate path

    • Set insecure_skip_verify to false

    Monitoring condition: No server authentication

    Required file: None.

    Required setup: Set the following in tls_config of the Blackbox exporter configuration file:

    • Set insecure_skip_verify to true

    Monitoring condition: Client authentication

    Required files:

    • Place the client certificate in Agent-path\conf\cert.

    • Place the client certificate key file in Agent-path\conf\user\secret.

    Required setup: Set the following in tls_config of the Blackbox exporter configuration file:

    • Set cert_file to the client certificate path

    • Set key_file to the client certificate key file path

    Monitoring condition: No client authentication

    Required file: None.

    Required setup: None.

    Monitoring condition: Basic authentication

    Required file: None.

    Required setup: Set the following in basic_auth of the Blackbox exporter configuration file:

    • Set username to the user name used for Basic authentication

    For details about setting the Basic authentication password, see 1.21.2(6)(f) Setup authentication ID, Password, and Bearer tokens for accessing the monitored Web Server (for Windows) (optional).
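For illustration, the following is a minimal sketch of a module definition that combines server authentication and Basic authentication, assuming the standard Blackbox exporter module layout. The module name http_secure_sample, the file name ca.crt, and the user name monitor-user are hypothetical; only the keys named in the table above are taken from this guide.

    modules:
      http_secure_sample:                # names of HTTP/HTTPS monitoring modules must start with "http"
        prober: http
        http:
          tls_config:
            ca_file: ca.crt              # CA certificate of the certificate authority that issued the server certificate
            insecure_skip_verify: false  # false = perform server authentication
          basic_auth:
            username: monitor-user       # user name used for Basic authentication
            # The password is registered with the secret management function
            # (see 1.21.2(6)(f)), not written in this file.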

For instructions on updating the Blackbox exporter configuration file and deploying the certificate files, see 1.21.2(1)(a) Edit the configuration files (for Windows).

If Blackbox exporter needs to access the monitored Web server through a proxy server, the proxy server must be set up. See 1.21.2(6)(d) Monitoring HTTP through proxy (for Windows) (optional).

If you add a module definition, you must define a scrape job so that Prometheus server scrapes with the newly created module. For details about the setup on the Prometheus server side, see 1.21.2(3)(c) Add Blackbox exporter scrape job (for Windows) (optional).

(c) Add, change, or delete the monitoring target (for Windows) (required)

The monitoring targets of Blackbox exporter are listed in the definition files shown below.

After you add targets, you must refresh the IM management node tree. For details, see 1.21.2(19) Creation and import of IM management node tree data (for Windows) (required).

  • Blackbox exporter (HTTP/HTTPS monitoring) discovery configuration file

    File name:

    • jpc_file_sd_config_blackbox_http.yml

    • file_sd_config_blackbox_module-name-begins-with-http.yml

    Setup target: Defines the monitoring targets of HTTP/HTTPS.

    Format: See Blackbox exporter (HTTP/HTTPS monitoring) discovery configuration file (jpc_file_sd_config_blackbox_http.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    Update procedure: See 1.21.2(1)(a) Edit the configuration files (for Windows).

  • Blackbox exporter (ICMP monitoring) discovery configuration file

    File name:

    • jpc_file_sd_config_blackbox_icmp.yml

    • file_sd_config_blackbox_module-name-begins-with-icmp.yml

    Setup target: Defines the monitoring targets of ICMP.

    Format: See Blackbox exporter (ICMP monitoring) discovery configuration file (jpc_file_sd_config_blackbox_icmp.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    Update procedure: See 1.21.2(1)(a) Edit the configuration files (for Windows).
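For illustration, the following is a minimal sketch of an entry in the Blackbox exporter (HTTP/HTTPS monitoring) discovery configuration file, assuming the standard Prometheus file_sd layout (targets and labels) that is also used elsewhere in this guide. The URL is a placeholder; for the exact format, see the reference in Chapter 2.

  - targets:
    - https://webserver1.example.com/index.html    # URL to be monitored (placeholder)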

(d) Monitoring HTTP through proxy (for Windows) (optional)

Setup "proxy_url" to Blackbox exporter configuration file (jpc_blackbox_exporter.yml).

For details about the Blackbox exporter configuration file, see Blackbox exporter configuration file (jpc_blackbox_exporter.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

For details on updating Blackbox exporter configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

Note that an authentication ID and password must be set when the proxy requires authentication. For details about how to set the authentication ID and password, see 1.21.2(6)(e) Setup the proxy authentication ID and Password (for Windows) (optional).

If you want to skip the DNS resolution and URL rewriting performed by Blackbox exporter, set skip_resolve_phase_with_proxy to true in the Blackbox exporter configuration file (jpc_blackbox_exporter.yml). For details, including an example of a case in which skip_resolve_phase_with_proxy must be set, see Blackbox exporter configuration file (jpc_blackbox_exporter.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
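For illustration, the following is a minimal sketch of these proxy settings within a module, assuming the standard Blackbox exporter module layout. The proxy host name and port are placeholders.

modules:
  http:
    prober: http
    http:
      proxy_url: http://proxy.example.com:8080    # proxy server used to reach the monitored Web server
      skip_resolve_phase_with_proxy: true         # set true only when DNS resolution must be skipped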

(e) Setup the proxy authentication ID and Password (for Windows) (optional)

When performing HTTP/HTTPS monitoring, if there is a proxy server that requires Basic authentication between Blackbox exporter and the monitored Web server, an authentication ID and password must be set.

The authentication ID is specified in "modules.module-name.http.proxy_user" of the Blackbox exporter configuration file (jpc_blackbox_exporter.yml). For details about how to edit the Blackbox exporter configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).
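A minimal sketch of the proxy_user setting follows (the values are placeholders; the password itself is registered by one of the methods listed below, not written in this file):

modules:
  http:
    prober: http
    http:
      proxy_url: http://proxy.example.com:8080
      proxy_user: proxy-auth-user    # authentication ID for the proxy's Basic authentication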

Set the password in one of the following ways. For details, see the explanation of each item.

  • Secret management command

  • List of Secrets dialog box of integrated operation viewer

  • REST API of Secret Management of integrated operation viewer

(f) Setup authentication ID, Password, and Bearer tokens for accessing the monitored Web Server (for Windows) (optional)

When you perform HTTP/HTTPS monitoring, you must set an authentication ID, password, and bearer token if Basic authentication is required for accessing the monitored Web server.

The authentication ID is specified in "modules.module-name.http.basic_auth.username" of the Blackbox exporter configuration file (jpc_blackbox_exporter.yml). For details about how to edit the Blackbox exporter configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

The password and bearer tokens are set in one of the following ways. For details, see the explanation of each item.

  • Secret management command

  • List of Secrets dialog box of integrated operation viewer

  • REST API of Secret Management of integrated operation viewer

(7) Setup of Yet another cloudwatch exporter

(a) Changing Ports (for Windows) (optional)

The listen port used by Yet another cloudwatch exporter is specified in the -listen-address option of the yet-another-cloudwatch-exporter command.

For details about how to change the options of the yet-another-cloudwatch-exporter command, see 1.21.2(1)(c) Change command-line options (for Windows). For details of the -listen-address option, see yet-another-cloudwatch-exporter command options in Service definition file (jpc_program-name_service.xml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is "20718". If you change the port number, review the firewall settings and prohibit access from outside.

(b) Modify Setup to connect to CloudWatch (for Windows) (required)

There are two ways to connect to CloudWatch from Yet another cloudwatch exporter: using an access key (hereinafter referred to as the access key method) and using an IAM role (hereinafter referred to as the IAM role method). If you install Yet another cloudwatch exporter on a host other than AWS/EC2, you can use only the access key method. If you install Yet another cloudwatch exporter on AWS/EC2, you can use either the access key method or the IAM role method.

The procedures for connecting to CloudWatch are described in the following four patterns.

  • Access Key Method (Part 1)

    Connect to CloudWatch as an IAM user in your AWS account

  • Access Key Method (Part 2)

    Create multiple IAM users in your AWS account with the same role, and connect to CloudWatch with IAM users in this role

  • IAM Role Method (Part 1)

    Connect to CloudWatch with an AWS account for which you have configured an IAM role

  • IAM Role Method (Part 2)

    Connect to CloudWatch with multiple AWS accounts with the same IAM role

- When connecting to CloudWatch with the access key method (Part 1)

  1. Create an IAM policy "yace_policy" in your AWS account (1) and set the following JSON format information.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "CloudWatchExporterPolicy",
                "Effect": "Allow",
                "Action": [
                    "tag:GetResources",
                    "cloudwatch:ListTagsForResource",
                    "cloudwatch:GetMetricData",
                    "cloudwatch:ListMetrics"
                ],
                "Resource": "*"
            }
        ]
    }
  2. Create an IAM group "yace_group" in the AWS account (1) and assign the IAM policy "yace_policy" created in step 1.

  3. Create IAM user "yace_user" in AWS account (1) and add it to the IAM group "yace_group" created in step 2.

  4. On the host of the monitoring module, create a credentials file in the "/root/.aws/" directory, and set the access key and secret access key of the IAM user "yace_user" created in step 3 in the [default] section of the credentials file.
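A minimal sketch of the credentials file created in step 4 follows; both values are placeholders for the keys issued to the IAM user "yace_user".

    [default]
    aws_access_key_id = your-access-key-id
    aws_secret_access_key = your-secret-access-key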

- When connecting to CloudWatch with the access key method (Part 2)

  1. Create IAM policy "yace_policy" in AWS account (2) and set the same JSON format information as in step 1 of the access key method (Part 1).

  2. Create the IAM role "cross_access_role" in AWS account (2), select "Another AWS account" for [Select trusted entity type], and specify the account ID of AWS account (1) as the account ID.

  3. Assign the IAM policy "yace_policy" created in step 1 to the IAM role "cross_access_role" created in step 2.

  4. Create IAM policy "yace_policy" in AWS account (1) and set the same JSON format information as in step 1 of the access key method (Part 1).

  5. Create IAM policy "account2_yace_policy" in AWS account (1) and set the following JSON format information.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "sts:AssumeRole",
                "Resource": "arn:aws:iam::AWS-account(2):role/cross_access_role"
            }
        ]
    }

    Here, "cross_access_role" is the name of the IAM role created in step 2.

  6. Create an IAM group "yace_group" in your AWS account (1), and assign the IAM policy "yace_policy" created in step 1 and the IAM policy "account2_yace_policy" created in step 5.

  7. Create IAM user "yace_user" in AWS account (1) and add it to the IAM group "yace_group" created in step 6.

  8. On the host of the monitoring module, create a credentials file in the "/root/.aws/" directory, and set the access key and secret access key of the IAM user "yace_user" created in step 7 in the [default] section of the credentials file.

  9. Add the following definition# of AWS account (2) to the Yet another cloudwatch exporter configuration file (ya_cloudwatch_exporter.yml).

    discovery:
      exportedTagsOnMetrics:
        AWS/S3:
          - jp1_pc_nodelabel
      jobs:
      - type: AWS/S3
        regions:
          - us-east-2
        metrics:
          - name: BucketSizeBytes
            statistics:
            - Sum
            period: 300000
            length: 400000
            nilToZero: true
     
      - type: AWS/S3
        regions:
          - us-east-2
        roles:
          - roleArn: "arn:aws:iam::AWS-account(2):role/cross_access_role"
        metrics:
          - name: BucketSizeBytes
            statistics:
            - Sum
            period: 300000
            length: 400000
            nilToZero: true
    #

    Lines 1 to 15 show the collection settings of AWS account (1), and lines 17 and later show the collection settings of AWS account (2).

    In the collection settings of AWS account (2), "roles.roleArn" must be specified. You can specify multiple AWS accounts for "roles.roleArn", but if you want to specify two or more accounts, contact your Hitachi sales representative.

- When connecting to CloudWatch using the IAM role method (Part 1)

  1. Create IAM policy "yace_policy" in AWS account (1) and set the same JSON format information as in step 1 of the access key method (Part 1).

  2. Create an IAM role "yace_role" in your AWS account (1), and select AWS service for [Select trusted entity type] and EC2 for [Select use case].

  3. Assign the IAM policy "yace_policy" created in step 1 to the IAM role "yace_role" created in step 2.

  4. Assign the IAM role "yace_role" created in steps 2 and 3 to the EC2 instance where the monitoring module of AWS account (1) is installed#.

    #

    Open the EC2 screen of the AWS console and perform this operation from the [Action] - [Security] - [Change IAM Role] menu.

- When connecting to CloudWatch using the IAM role method (Part 2)

  1. Create IAM policy "yace_policy" in AWS account (2) and set the same JSON format information as in step 1 of the access key method (Part 1).

  2. Create the IAM role "cross_access_role" in AWS account (2), select "Another AWS account" for [Select trusted entity type], and specify the account ID of AWS account (1) as the account ID. Also, specify an external ID if necessary.

  3. Create IAM policy "account2_yace_policy" in AWS account (1) and set the following JSON format information.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "sts:AssumeRole",
                "Resource": "arn:aws:iam::AWS-account(2):role/cross_access_role"
            }
        ]
    }

    Here, "cross_access_role" is the name of the IAM role created in step 2.

  4. Create an IAM role "yace_role" in your AWS account (1), and select AWS service for [Select trusted entity type] and EC2 for [Select use case].

  5. Assign the IAM policy "account2_yace_policy" created in step 3 to the IAM role "yace_role" created in step 4.

  6. Assign the IAM role "yace_role" created in step 4 to the EC2 instance where the monitoring module of AWS account (1) is installed#.

    #

    Open the EC2 screen of the AWS console and perform this operation from the [Action] - [Security] - [Change IAM Role] menu.

  7. Add the following definition# of AWS account (2) to the Yet another cloudwatch exporter configuration file (ya_cloudwatch_exporter.yml).

    discovery:
      exportedTagsOnMetrics:
        AWS/S3:
          - jp1_pc_nodelabel
      jobs:
      - type: AWS/S3
        regions:
          - us-east-2
        roles:
          - roleArn: "arn:aws:iam::AWS-account(2):role/cross_access_role"
            externalId: "External-ID"
        metrics:
          - name: BucketSizeBytes
            statistics:
            - Sum
            period: 300000
            length: 400000
            nilToZero: true
    #

    Lines 9 to 11 show the collection settings for AWS account (2).

    In the collection settings of AWS account (2), "roles.roleArn" must be specified. You can specify multiple AWS accounts for "roles.roleArn", but if you want to specify two or more accounts, contact your Hitachi sales representative.

    Specify "externalId" in the collection settings of AWS account (2) only if you specified an external ID in step 2.

(c) Connect to CloudWatch through a proxy (for Windows) (optional)

If you need to connect to CloudWatch through a proxy, use the HTTPS_PROXY environment variable (the HTTP_PROXY environment variable is not available).

The format of the value specified in the HTTPS_PROXY environment variable is shown below.

http://proxy-user-name:password@proxy-server-host-name:port-number
Important

Note that the value begins with "http://" even though the environment variable is named HTTPS_PROXY.

■ For Windows

  1. Stop Yet another cloudwatch exporter.

  2. Open the System Properties dialog box from [Settings] - [System] - [About] - [Related settings] - [Advanced system settings].

  3. Click [Environment Variables] to display the Environment Variables dialog box.

  4. Set the following system environment variable.

    Variable name: HTTPS_PROXY

    Value: http://proxy-user-name:password@proxy-server-host-name:port-number

  5. Start Yet another cloudwatch exporter.

Important
  • Because HTTPS_PROXY is set as a system environment variable, it is reflected in all processes running on that host.

  • Note that system environment variables can be displayed by anyone who can log in to the host. When a password is specified in the HTTPS_PROXY environment variable, measures such as limiting the users who can log in to the system are required.

■ For Linux

  1. Stop Yet another cloudwatch exporter.

  2. Create a file with any name and write the following line in it:

    HTTPS_PROXY=http://proxy-user-name:password@proxy-server-host-name:port-number

    For details about what to write, execute man systemd.exec and check the values that can be set for "EnvironmentFile=".

  3. Add an EnvironmentFile entry to the unit definition file and specify the path of the file created in step 2.

      :
    [Service]
    EnvironmentFile = "path-of-file-created-in-step-2"
    WorkingDirectory = ....
    ExecStart = ....
      :
  4. Refresh systemd.

    Execute the following command:

    systemctl daemon-reload
  5. Start Yet another cloudwatch exporter.

(d) Add AWS Services to be Monitored (optional)

The following AWS services are monitored by default. If you want to monitor other AWS services, perform the steps described after the list.

  • AWS/EC2

  • AWS/Lambda

  • AWS/S3

  • AWS/DynamoDB

  • AWS/States

  • AWS/SQS

  • AWS/EBS

  • AWS/ECS

  • AWS/EFS

  • AWS/FSx

  • AWS/RDS

  • AWS/SNS

  • ECS/ContainerInsights

  1. Add AWS service definition in Yet another cloudwatch exporter configuration file.

    For details about editing, see 1.21.2(1)(a) Edit the configuration files (for Windows).

    Add the AWS service definition in the locations shown below.

    • discovery.exportedTagsOnMetrics

    - Description

    discovery:
      exportedTagsOnMetrics:
        AWS-service-name:
          - jp1_pc_nodelabel

    - Sample Setup

    discovery:
      exportedTagsOnMetrics:
        AWS/EC2:
          - jp1_pc_nodelabel
    • discovery.jobs

    - Description

    discovery:
      : 
      jobs:
      - type: AWS-service-name
        regions:
          - AWS-region
        period: 0
        length: 600
        delay: 120
        metrics:

    - Sample Setup

    discovery:
      : 
      jobs:
      - type: AWS/EC2
        regions:
          - ap-northeast-1
        period: 0
        length: 600
        delay: 120
        metrics:
  2. Add the metrics that you want to collect.

    See 1.21.2(7)(f) Modify metrics to collect (optional).

(e) Monitoring AWS Resources (required)

For an AWS resource to be monitored by Yet another cloudwatch exporter, the jp1_pc_nodelabel tag must be set on the AWS resource that you want to monitor. See the AWS documentation for how to set tags on AWS resources.

For the jp1_pc_nodelabel tag, set the following value, using 1 to 255 alphanumeric characters and hyphens:

  • For EC2

    Specify the host name.

  • Other than EC2

    Specify the text to be used as the label of the IM management node.

Important
  • Set a string that is unique within each AWS service. The same string can be set for different services (for example, EC2 and Lambda).

  • Use different strings for accounts that Yet another cloudwatch exporter monitors as different destinations. Even in different regions, use different strings for the same service.

  • If a string is duplicated, only one IM management node is created.

The value set in the jp1_pc_nodelabel tag is added as the value of the jp1_pc_nodelabel label of samples collected by Yet another cloudwatch exporter.

(f) Modify metrics to collect (optional)

  1. Check the metrics collected in CloudWatch.

    Confirm that the metrics you want to collect are being collected in CloudWatch.

    In addition, check the CloudWatch metric names and CloudWatch statistic types in preparation for the settings in the following steps.

    For details about CloudWatch metric names and CloudWatch statistic types, see "Amazon CloudWatch User Guide" in the AWS documentation.

  2. Add definitions of CloudWatch metrics to the Yet another cloudwatch exporter configuration file.

    Describe the CloudWatch metric definitions in the discovery.jobs.metrics section shown below.

    discovery:
       : 
      jobs:
      - type: AWS-service-name
        regions:
          - AWS-region
        period: 0
        length: 600
        delay: 120
        metrics:
          - name: CloudWatch-metric-name-1#1
            statistics:
            - CloudWatch-statistic-types#2
          - name: CloudWatch-metric-name-2#3
            statistics:
            - CloudWatch-statistic-types#4
          - name: CloudWatch-metric-name-3#5
            statistics:
            - CloudWatch-statistic-types#6
            :

    #1 Example of CloudWatch-metric-name-1: CPUUtilization

    #2 Example of CloudWatch-statistic-types: Average

    #3 Example of CloudWatch-metric-name-2: DiskReadBytes

    #4 Example of CloudWatch-statistic-types: Sum

    #5 Example of CloudWatch-metric-name-3: DiskWriteBytes

    #6 Example of CloudWatch-statistic-types: Sum
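    Filled in with the example values from the footnotes above, the metrics section of step 2 would look like the following:

        metrics:
          - name: CPUUtilization
            statistics:
            - Average
          - name: DiskReadBytes
            statistics:
            - Sum
          - name: DiskWriteBytes
            statistics:
            - Sum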

  3. Add the metrics to the Prometheus configuration file.

    The value of metric_relabel_configs lists the metrics to collect, separated by |. Add the metrics that you want to collect, and delete any metrics that do not need to be collected. For the naming conventions for metric names, see Naming conventions for Exporter metrics in 3.15.1(1)(g) Yet another cloudwatch exporter (CloudWatch performance data collection capability) in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

    For details on editing Prometheus configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

    <Sample Setup>

      - job_name: 'jpc_cloudwatch'
        : 
        metric_relabel_configs:
          - regex: 'tag_(jp1_pc_.*)'
            replacement: ${1}
            action: labelmap
          - regex: 'tag_(jp1_pc_.*)'
            action: 'labeldrop'
          - source_labels: ['__name__','jp1_pc_nodelabel']
            regex: '(aws_ec2_cpuutilization_average|aws_ec2_disk_read_bytes_sum|aws_ec2_disk_write_bytes_sum|aws_lambda_errors_sum|aws_lambda_duration_average|aws_s3_bucket_size_bytes_sum|aws_s3_5xx_errors_sum|aws_dynamodb_consumed_read_capacity_units_sum|aws_dynamodb_consumed_write_capacity_units_sum|aws_states_execution_time_average|aws_states_executions_failed_sum|aws_sqs_approximate_number_of_messages_delayed_sum|aws_sqs_number_of_messages_deleted_sum<add metrics here, separated by "|">);.+$'
            action: 'keep'
  4. If required, define a trend view in the metric definition file.

    For descriptions, see Yet another cloudwatch exporter metric definition file (metrics_ya_cloudwatch_exporter.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(8) Setup of Promitor

If you use Promitor for monitoring, configure the following settings.

(a) Configuring the settings for establishing a connection to Azure (required)

- Modify the service definition file (for Windows) or the unit definition file (for Linux)

You specify the storage location of the Promitor configuration file with an absolute path in the PROMITOR_CONFIG_FOLDER environment variable. Modify this environment variable, which is found in the service definition file (for Windows) or the unit definition file (for Linux). For details on the service definition file (for Windows) and the unit definition file (for Linux), see the sections describing the applicable files under Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

- Modify the Promitor Scraper runtime configuration file (runtime.yaml)

In the Promitor Scraper runtime configuration file (runtime.yaml), specify the path to the Promitor Scraper configuration file (metrics-declaration.yaml) in metricsConfiguration.absolutePath. For details on the Promitor Scraper runtime configuration file (runtime.yaml), see the section describing the applicable file under Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
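For illustration, a minimal sketch of this setting in the Promitor Scraper runtime configuration file (runtime.yaml) follows; the path shown is hypothetical, so replace it with the actual storage location of your Promitor Scraper configuration file.

metricsConfiguration:
  absolutePath: C:\jp1ima\conf\promitor\scraper\metrics-declaration.yaml    # hypothetical absolute path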

- Configure information for connecting to Azure

Configure authentication information used for Promitor to connect to Azure. For details on how to configure it, see 1.21.2(8)(b) Configuring authentication information for connecting to Azure.

(b) Configuring authentication information for connecting to Azure

Promitor can connect to Azure through the service principal method or the managed ID method. Only the service principal method is available when Promitor is installed on hosts other than Azure Virtual Machines. Both the service principal method and the managed ID method are available when Promitor is installed on Azure Virtual Machines.

The following describes three procedures for connecting to Azure.

  • Service principal method

    This uses a client secret to connect to Azure.

  • Managed ID method (system-assigned)

    This uses a system-assigned managed ID to connect to Azure.

  • Managed ID method (user-assigned)

    This uses a user-assigned managed ID to connect to Azure.

- Using the service principal method to connect to Azure

Perform steps 1 to 3 in Azure Portal and then perform steps 4 to 6 on a host where Promitor has been installed.

  1. Create an application and issue a client secret.

  2. Obtain an application (client) ID in Overview for the application.

  3. Select a resource group (or subscription) to be monitored, execute Access control (IAM) - Add role assignment, and specify Monitoring Reader.

  4. Add the client secret value under the Value column issued in step 1 to JP1/IM - Agent.

    Specify the values in the table below for keys used to register the secret.

    Secret registration key

    Value

    Promitor Resource Discovery key

    Promitor.resource_discovery.env.AUTH_APPKEY

    Promitor Scraper key

    Promitor.scraper.env.AUTH_APPKEY

    For details on how to register the secrets, see the description in 3.15.10(2) Adding, changing, or deleting a secret in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

    Important

    When building in a container environment, you cannot perform this step before you create a container image. Create a container, and then perform this step.

  5. In the Promitor Scraper runtime configuration file (runtime.yaml) and the Promitor Resource Discovery runtime configuration file (runtime.yaml), specify ServicePrincipal for authentication.mode.

  6. In the Promitor Scraper configuration file (metrics-declaration.yaml) and the Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml), specify the information on the Azure instance to connect to (see the sketch after this list).

    • Promitor Scraper configuration file (metrics-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureMetadata.

    • Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureLandScape.
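The following sketch illustrates steps 5 and 6 of the service principal method. The keys follow Promitor's standard configuration layout; all ID values are placeholders.

# runtime.yaml (step 5): select the service principal method
authentication:
  mode: ServicePrincipal
  identityId: application-client-id    # the application (client) ID obtained in step 2

# metrics-declaration.yaml (step 6): the Azure instance to connect to
azureMetadata:
  tenantId: your-tenant-id
  subscriptionId: your-subscription-id
  resourceGroupName: your-resource-group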

- Using the managed ID method (system-assigned) to connect to Azure

Perform steps 1 to 3 in Azure Portal and then perform steps 4 to 6 on a host where Promitor has been installed.

  1. In Virtual Machines, select the Azure Virtual Machine where Promitor has been installed.

  2. Go to Identity and then System assigned, and change Status to On.

  3. In Identity - System assigned, under Permissions, select Azure role assignments and specify Monitoring Reader.

  4. In the Promitor Scraper runtime configuration file (runtime.yaml) and the Promitor Resource Discovery runtime configuration file (runtime.yaml), specify SystemAssignedManagedIdentity for authentication.mode.

  5. In the Promitor Scraper configuration file (metrics-declaration.yaml) and the Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml), specify the information on the Azure instance to connect to.

    • Promitor Scraper configuration file (metrics-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureMetadata.

    • Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureLandScape.

- Using the managed ID method (user-assigned) to connect to Azure

Perform steps 1 to 5 in Azure Portal and then perform steps 6 and 7 on a host where Promitor has been installed.

  1. In the service search, select Managed Identities and then Create Managed Identity.

  2. Specify a resource group, name, and other information to create a managed ID.

  3. In Azure role assignments, assign Monitoring Reader.

  4. In Virtual Machines, select the Azure Virtual Machine where Promitor has been installed.

  5. Select Identity, User assigned, and then Add, and add the managed ID you created in step 2.

  6. In the Promitor Scraper runtime configuration file (runtime.yaml) and the Promitor Resource Discovery runtime configuration file (runtime.yaml), specify UserAssignedManagedIdentity for authentication.mode.

  7. In the Promitor Scraper configuration file (metrics-declaration.yaml) and the Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml), specify the information on the Azure instance to connect to.

    • Promitor Scraper configuration file (metrics-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureMetadata.

    • Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureLandScape.

(c) Configuring a proxy-based connection to Azure (optional)

If your connection to Azure must be established via a proxy, use the HTTPS_PROXY environment variable. For details on how to set it up, see 2.19.2(8)(c) Connect to CloudWatch through a proxy (for Linux) (optional) in 2.19 Setup for JP1/IM - Agent (for UNIX). For NO_PROXY, specify the value of resourceDiscovery.host in the Promitor Scraper runtime configuration file (runtime.yaml).

(d) Configuring scraping targets (required)

- Configure monitoring targets that must be specified separately (required)

Monitoring targets can be detected automatically by default; however, some of the services, like the ones described below, must be specified manually. For these services to be monitored, edit the Promitor Scraper configuration file (metrics-declaration.yaml) to specify your monitoring targets separately.

  • Services you must specify separately as monitoring targets

    These services are found as the ones with automatic discovery disabled in the table listing the services Promitor can monitor of 3.15.1(1)(h) Promitor (Azure Monitor performance data collection capability) in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

  • How to specify monitoring targets separately

    Uncomment a monitoring target in the Promitor Scraper configuration file (metrics-declaration.yaml) and add it to the resources section, as in the sketch below.
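For illustration, a minimal sketch of a separately specified monitoring target in the Promitor Scraper configuration file (metrics-declaration.yaml) follows, assuming Promitor's standard metric declaration layout. The metric entry and the resource name my-function-app are hypothetical.

metrics:
  - name: promitor_demo_http_5xx            # hypothetical metric entry
    description: "Number of HTTP 5xx responses"
    resourceType: FunctionApp
    azureMetricConfiguration:
      metricName: Http5xx
      aggregation:
        type: Total
    resources:                              # separately specified monitoring targets
      - functionAppName: my-function-app    # hypothetical resource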

- Change monitoring targets (optional)

In Promitor, monitoring targets can be specified in the following two ways:

  • Specifying monitoring targets separately

    If you want to separately specify Azure resources to be monitored, add them to the Promitor Scraper configuration file (metrics-declaration.yaml).

  • Detecting monitoring targets automatically

    If you want to detect resources in your tenant automatically and monitor Azure resources in them, add them to the Promitor Scraper configuration file (metrics-declaration.yaml) and the Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml).

- Change monitoring metrics (optional)

To change metrics to be collected or displayed:

  1. Confirm that Azure Monitor has collected the metric.

    Confirm that Azure Monitor has collected the metric you want to collect. As preparation for the settings in the next step, check the metric name and the aggregation type.

    For the metric name, see "Metric" in "Reference > Supported metrics > Resource metrics" in the Azure Monitor documentation. For the aggregation type, see "Aggregation Type" in "Reference > Supported metrics > Resource metrics" in the Azure Monitor documentation.

  2. Edit the settings in the Prometheus configuration file (jpc_prometheus_server.yml).

    If you want to change metrics to be collected, modify the metric_relabel_configs setting in the Prometheus configuration file (jpc_prometheus_server.yml).

    For details on the Prometheus configuration file, see Prometheus configuration file (jpc_prometheus_server.yml) of JP1/IM - Agent in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

  3. Edit the settings in the Promitor metric definition file (metrics_promitor.conf).

    If you want to change metrics displayed in the Trends tab of the integrated operation viewer, edit the settings in the Promitor metric definition file (metrics_promitor.conf).

    For details on the Promitor metric definition file, see Promitor metric definition file (metrics_promitor.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(e) Configuring labels for tenant information (optional)

Add labels for the tenant ID and subscription ID of a monitoring target to the property label definition file (property_labels.conf). Otherwise, the tenant and subscription are shown as default in the properties of an IM management node and in an extended attribute of a JP1 event.

For details on the property label definition file, see Property label definition file (property_labels.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(f) Configuring the system node definition file (imdd_systemnode.conf) (required)

To create a system node described in 3.15.6(1)(i) Tree format in the JP1/Integrated Management 3 - Manager Overview and System Design Guide, edit the system node definition file (imdd_systemnode.conf) to configure the setting items listed in the table below. You can specify any values to the items that are not listed in the table.

Table 1‒16: Settings in the system node definition file (imdd_systemnode.conf)

displayName

  Specify the name of the service that publishes metrics for Azure Monitor.

type

  Specify it in uppercase characters in the following format:

  JP1PC-AZURE-Azure-service-name

  Azure-service-name is equivalent to one of the names under the Promitor resourceType name column in the table listing the services Promitor can monitor, in 3.15.1(1)(h) Promitor (Azure Monitor performance data collection capability) in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

name

  Specify the following:

  [{".*":"regexp"}]

The following table shows a setting example of the system node definition file when you create system management nodes for Azure services that are found by default as monitoring targets in the Promitor metric definition file (metrics_promitor.conf).

Table 1‒17: Setting example of the system node definition file (imdd_systemnode.conf)

For every entry, specify [{".*":"regexp"}] for name. The displayName and type pairs are as follows:

  • Azure Function App: JP1PC-AZURE-FUNCTIONAPP

  • Azure Container Instances: JP1PC-AZURE-CONTAINERINSTANCE

  • Azure Kubernetes Service: JP1PC-AZURE-KUBERNETESSERVICE

  • Azure File Storage: JP1PC-AZURE-FILESTORAGE

  • Azure Blob Storage: JP1PC-AZURE-BLOBSTORAGE

  • Azure Service Bus Namespace: JP1PC-AZURE-SERVICEBUSNAMESPACE

  • Azure Cosmos DB: JP1PC-AZURE-COSMOSDB

  • Azure SQL Database: JP1PC-AZURE-SQLDATABASE

  • Azure SQL Server: JP1PC-AZURE-SQLSERVER

  • Azure SQL Managed Instance: JP1PC-AZURE-SQLMANAGEDINSTANCE

  • Azure SQL Elastic Pool: JP1PC-AZURE-SQLELASTICPOOL

  • Azure Logic Apps: JP1PC-AZURE-LOGICAPP

The following shows how the items in the above table can be defined in the system node definition file.

{
  "meta":{
    "version":"2"
  },
  "allSystem":[
    {
      "id":"functionApp",
      "displayName":"Azure Function App",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-FUNCTIONAPP",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"containerInstance",
      "displayName":"Azure Container Instances",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-CONTAINERINSTANCE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"kubernetesService",
      "displayName":"Azure Kubernetes Service",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-KUBERNETESSERVICE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"fileStorage",
      "displayName":"Azure File Storage",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-FILESTORAGE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"blobStorage",
      "displayName":"Azure Blob Storage",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-BLOBSTORAGE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"serviceBusNamespace",
      "displayName":"Azure Service Bus Namespace",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-SERVICEBUSNAMESPACE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"cosmosDb",
      "displayName":"Azure Cosmos DB",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-COSMOSDB",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"sqlDatabase",
      "displayName":"Azure SQL Database",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-SQLDATABASE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"sqlServer",
      "displayName":"Azure SQL Server",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-SQLSERVER",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"sqlManagedInstance",
      "displayName":"Azure SQL Managed Instance",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-SQLMANAGEDINSTANCE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"sqlElasticPool",
      "displayName":"Azure SQL Elastic Pool",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-SQLELASTICPOOL",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"logicApp",
      "displayName":"Azure Logic Apps",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-LOGICAPP",
          "name":[{".*":"regexp"}]
        }
      ]
    }
  ]
}

With the system node definition file configured, when the jddcreatetree command is run and a PromitorSID other than VirtualMachine is created, an IM management node is displayed under the system node that has the corresponding Azure service name. For a PromitorSID of VirtualMachine, an IM management node is displayed under the node that represents the host, without the name having to be listed in the system node definition file.

For details on the system node definition file, see System node definition file (imdd_systemnode.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(g) Changing Ports (optional)

■ Specifying a port number of scrape used by Promitor Scraper

The listen port that the Promitor Scraper uses is specified in the Promitor Scraper runtime configuration file (runtime.yaml).

For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

The default port is "20719". If port number is changed, review setup of the firewall and prohibit accessing from outside.

Notes:

Change this port in the Promitor Scraper runtime configuration file (runtime.yaml), not with a command-line option. If you change this configuration file, you must also change the Promitor discovery configuration file (jpc_file_sd_config_promitor.yml).

For details, see Promitor Scraper runtime configuration file (runtime.yaml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
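For illustration, a minimal sketch of the port setting in the Promitor Scraper runtime configuration file (runtime.yaml) follows, assuming Promitor's standard server.httpPort key:

server:
  httpPort: 20719    # change this value to the new listen port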

■ Specifying a port number of scrape used by Promitor Resource Discovery

The listen port that the Promitor Resource Discovery uses is specified in the Promitor Resource Discovery runtime configuration file (runtime.yaml).

For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

The default port is "20720". If port number is changed, review setup of the firewall and prohibit accessing from outside.

Notes:

Change this port in the Promitor Resource Discovery runtime configuration file (runtime.yaml), not with a command-line option. If you change this configuration file, you must also change the Promitor Scraper runtime configuration file (runtime.yaml).

For details, see Promitor Resource Discovery runtime configuration file (runtime.yaml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(9) Setup of Fluentd

(a) Changing the settings of the log monitoring common definition file (for Windows) (optional)

If you want to change the following settings, edit the log monitoring common definition file:

  • Port number of JP1/IM agent control base

  • Settings of buffer plugin

For details about the log monitoring common definition file, see Log monitoring common definition file (jpc_fluentd_common.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

(b) Monitoring the text-formatted log file (for Windows) (required)

If you want to monitor a new text-formatted log file, perform the following steps:

  1. Create a text-formatted log file monitoring definition file.

    Create a text-formatted log file monitoring definition file by copying the template shown below and renaming the copy to "fluentd_log-monitoring-name_tail.conf". (A generic sketch of such a file follows this procedure.)

    Copy source: Agent-path\conf\fluentd_@@trapname@@_tail.conf.template

    Copy destination: Agent-path\conf\user\fluentd_log-monitoring-name_tail.conf

    For descriptions of the monitoring text-formatted log file definition file, see Monitoring text-formatted log file definition file (fluentd_@@trapname@@_tail.conf.template) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  2. Edit the log monitoring target definition file (jpc_fluentd_common_list.conf).

    If you want to temporarily stop log monitoring for some monitoring definition files, list the file names of those monitoring definition files in the log monitoring target definition file.

    For details about the log monitoring target definition file, see Log monitoring target definition file (jpc_fluentd_common_list.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference. If you have not edited the log monitoring target definition file, this step is not required.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  3. Apply the changes to the integrated operation viewer tree.

    For details about how to apply them, see 1.21.2(19) Creation and import of IM management node tree data (for Windows) (required).

Note

If you change the monitoring settings of a text-formatted log file (for example, when the log file trap name in the monitoring definition file is changed), perform steps 2 and 3 above.
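The following is a generic sketch of what a text-formatted log file monitoring definition file based on the template might contain, using Fluentd's standard in_tail input. The path, tag, and parse type are hypothetical; the template shipped with JP1/IM - Agent defines the actual required settings.

<source>
  @type tail
  path C:/app/logs/server.log            # monitored log file (hypothetical)
  pos_file C:/app/logs/server.log.pos    # position file so that reading resumes after a restart
  tag jp1.log.server                     # tag attached to collected records (hypothetical)
  <parse>
    @type none                           # treat each line as unstructured text
  </parse>
</source>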

(c) Modifying the monitoring settings of the text-formatted log file (for Windows) (optional)

If you want to change the monitoring setup for a text-formatted log file, perform the following steps:

  1. Change the text-formatted log file monitoring definition file.

    Modify the monitoring definition file (fluentd_log-monitoring-name_tail.conf) that you created.

    For descriptions of the monitoring text-formatted log file definition file, see Monitoring text-formatted log file definition file (fluentd_@@trapname@@_tail.conf.template) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  2. Edit the log monitoring target definition file (jpc_fluentd_common_list.conf).

    In the following cases, list the file names of the monitoring definition files in the log monitoring target definition file:

    • When the log monitoring name of the monitoring definition file is changed

    • When you temporarily stop log monitoring for some monitoring definition files

    For details about the log monitoring target definition file, see Log monitoring target definition file (jpc_fluentd_common_list.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference. If you have not edited the log monitoring target definition file, this step is not required.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  3. Apply the changes to the integrated operation viewer tree.

    If a value in the [Metric Settings] section is changed, apply the changes to the integrated operation viewer tree. For details about how to apply them, see 1.21.2(19) Creation and import of IM management node tree data (for Windows) (required).

(d) Deleting the monitoring settings of the text-formatted log file (for Windows) (optional)

To delete the monitoring settings of a text-formatted log file, perform the following steps:

  1. Delete the text-formatted log file monitoring definition file.

    Delete the monitoring definition file (fluentd_log-monitoring-name_tail.conf) that you created.

    For details about how to delete configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  2. Edit the log monitoring target definition file (jpc_fluentd_common_list.conf)

    If the file names of the monitoring definition files are defined in the log monitoring target definition file (for example, to temporarily stop log monitoring), delete those file names.

  3. Apply the changes to the integrated operation viewer tree.

    For details about how to apply them, see 1.21.2(19) Creation and import of IM management node tree data (for Windows) (required).

(e) Monitoring the Windows event log (required)

To monitor a new Windows event log, perform the following steps:

  1. Create a Windows event log monitoring definition file.

    Create a Windows event log monitoring definition file by copying the template shown below and renaming the copy to "fluentd_log-monitoring-name_wevt.conf".

    Copy source: Agent-path\conf\fluentd_@@trapname@@_wevt.conf.template

    Copy destination: Agent-path\conf\user\fluentd_log-monitoring-name_wevt.conf

    For descriptions of Windows event log monitoring definition file, see Windows event log monitoring definition file (fluentd_@@trapname@@_wevt.conf.template) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  2. Edit the log monitoring target definition file (jpc_fluentd_common_list.conf).

    If you want to temporarily stop log monitoring for some monitoring definition files, list the file names of those monitoring definition files in the log monitoring target definition file.

    For details about the log monitoring target definition file, see Log monitoring target definition file (jpc_fluentd_common_list.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference. If you have not edited the log monitoring target definition file, this step is not required.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  3. Apply the changes to the integrated operation viewer tree.

    For details about how to apply them, see 1.21.2(19) Creation and import of IM management node tree data (for Windows) (required).

Note

If you change the monitoring settings of the Windows event log (for example, when the log file trap name in the monitoring definition file is changed), perform steps 2 and 3 above.

(f) Modifying the monitoring settings of the Windows event log (optional)

If you want to change the monitoring settings of the Windows event log, perform the following steps:

  1. Change the Windows event log monitoring definition file.

    Change the monitoring definition file (fluentd_log-monitoring-name_wevt.conf) that you created.

    For descriptions of Windows event log monitoring definition file, see Windows event log monitoring definition file (fluentd_@@trapname@@_wevt.conf.template) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  2. Edit the log monitoring target definition file (jpc_fluentd_common_list.conf).

    In the following cases, list the file names of the monitoring definition files in the log monitoring target definition file:

    • When the log file trap name of the monitoring definition file is changed

    • When you temporarily stop log monitoring for some monitoring definition files

    For details about the log monitoring target definition file, see Log monitoring target definition file (jpc_fluentd_common_list.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference. If you have not edited the log monitoring target definition file, this step is not required.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  3. Apply the changes to the integrated operation viewer tree.

    If a value in the [Metric Settings] section is changed, apply the changes to the integrated operation viewer tree. For details about how to apply them, see 1.21.2(19) Creation and import of IM management node tree data (for Windows) (required).

(g) Deleting the monitoring settings of the Windows event log (optional)

To delete the monitoring settings of the Windows event log, perform the following steps:

  1. Delete the Windows event log monitoring definition file.

    Delete the monitoring definition file (fluentd_log-monitoring-name_wevt.conf) that you created.

    For descriptions of Windows event log monitoring definition file, see Windows event log monitoring definition file (fluentd_@@trapname@@_wevt.conf.template) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details about how to delete configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  2. Edit the log monitoring target definition file (jpc_fluentd_common_list.conf)

    If the file names of the monitoring definition files are defined in the log monitoring target definition file (for example, to temporarily stop log monitoring), delete those file names.

  3. Apply the changes to the integrated operation viewer tree.

    For details about how to apply them, see 1.21.2(19) Creation and import of IM management node tree data (for Windows) (required).

(h) Setup of the log metrics definition (optional)

If you want to use the log metrics feature, configure the settings of Fluentd of JP1/IM - Agent by following the procedure for enabling add-on programs, and then configure the following settings.

■ Editing a log metrics definition file (defining log metrics)

Create a log metrics definition file (fluentd_any-name_logmetrics.conf) to define the input and output plug-in features.

In addition, specify the log metrics to be monitored in the output plug-in function definition of the log metrics definition file.

  • When adding log metrics to be monitored:

    Add a new <metric> definition in parallel with the existing <metric> definition.

  • When changing the log metrics to be monitored:

    Change the appropriate <metric> definition.

  • When deleting the log metrics to be monitored:

    Delete or comment out all relevant definitions in the log metrics definition file.

For details on sample log metrics definition files, see the section describing the applicable file under 3.15.1(1)(l) Log metrics in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

■ Editing the log monitoring target definition file (adding include statements)

To enable logs to be monitored as defined in the log metrics definition file, add a row that starts with @include to the log monitoring target definition file (jpc_fluentd_common_list.conf), followed by the name of the log metrics definition file you edited in Editing a log metrics definition file (defining log metrics).
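For example, if the log metrics definition file is named fluentd_myapp_logmetrics.conf (a hypothetical name), add the following line:

@include fluentd_myapp_logmetrics.conf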

For details on sample log monitoring target definition files, see the section describing the applicable file under 3.15.1(1)(l) Fluentd (Log metrics) in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.
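For example, assuming the log metrics definition file is named fluentd_app_logmetrics.conf (a hypothetical name), the added row would look like this:

  @include fluentd_app_logmetrics.conf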

■ Restarting Fluentd

To apply the definitions specified in Editing a log metrics definition file (defining log metrics) and Editing the log monitoring target definition file (adding include statements), restart Fluentd.

For details on starting and stopping services before the restart, see Chapter 10. Starting and stopping JP1/IM - Agent in the JP1/Integrated Management 3 - Manager Administration Guide.

■ Setting up log metrics definition in JP1/IM - Manager

If you want to show time-series data on log metrics when trend information of nodes is displayed in the Trends tab of the integrated operation viewer of JP1/IM - Manager through the log metrics feature, define log metrics to be shown in JP1/IM - Manager.

Use user-specific metric definitions for the log metrics definitions here.

For descriptions, see User-specific metric definition file (metrics_any-Prometheus-trend-name.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(i) Changing Ports (optional)

■ Specifying the port number for scraping used by Fluentd

Specify the port number for scraping used by the fluent-plugin-prometheus plug-in in the log metrics definition file.

Specify listening-port-number# for port in <source>, as shown in the following example of changes.

Example of changes:

## Input
<worker worker-id-used-for-the-log-metrics-feature>
  <source>
    @type prometheus
    bind 0.0.0.0
    port listening-port-number
    metrics_path /metrics
  </source>
</worker>
 
<worker worker-id>
<source>
(Omitted)
</source>
 
(Omitted below)
#:

The actual listen port used by Fluentd (log metrics feature) depends on the worker_id of the worker used by the log metrics feature (the value specified in "worker-id" in the log metrics definition file (fluentd_any-name_logmetrics.conf)), and is given by the following formula:

24820 + worker_id

For example, the worker whose worker_id is 2 listens on port 24822. If the log metrics feature uses 129 workers, the default port numbers are the sequential numbers from 24820 to 24948.

■ Changing the port number for scraping used by Prometheus

Change the port number for scraping defined in the Prometheus discovery configuration file to the listening-port-number+worker-id specified in Specifying the port number for scraping used by Fluentd.

Specify listening-port-number+worker-id for targets in the following example of changes.

- targets:
  - name-of-monitored-host:listening-port-number+worker-id
(Omitted)
  labels:
(Omitted)
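For example, assuming the listening port number 24820 and a worker whose worker_id is 2 on a monitored host named hostA (both assumptions), the entry would become:

- targets:
  - hostA:24822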

(j) Monitoring SAP system logging (optional)

To newly monitor the system log information of an SAP system, perform the Fluentd setting procedure described below along with the Script exporter setting procedure described in 1.21.2(12)(d) Setting when executing SAP system log extract command (optional).

  1. Create the system log information monitoring definition file for the SAP system.

    Create a text-formatted log file monitoring definition file by copying the sample file (fluentd_sap_syslog_tail.conf). For the storage location of the sample file, see the File/Directory list in Appendix A.4 JP1/IM - Agent in the JP1/Integrated Management 3 - Manager Overview and System Design Guide. Change the name of the copied file to "fluentd_log-monitoring-name_tail.conf".

    For details about text-formatted log file monitoring definition file descriptions, see text-formatted log file monitoring definition file (fluentd_@@trapname@@_tail.conf.template) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details about how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  2. Edit the log monitoring target definition file (jpc_fluentd_common_list.conf).

    If you are performing operations that temporarily stop log monitoring for some monitoring definition files, define the name of the created monitoring definition file in the log monitoring target definition file.

    For details about the log monitoring target definition file, see Log monitoring target definition file (jpc_fluentd_common_list.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference. If you have not edited the log monitoring target definition file, no editing is required here.

    For details about how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  3. Apply in integrated operation viewer tree.

    For details about how to apply, see 1.21.2(19) Creation and import of IM management node tree data (for Windows) (required).

(k) Modify SAP system logging monitoring configuration (optional)

For details about how to change the monitoring settings of SAP system's system log information, see 1.21.2(9)(c) Modifying the monitoring settings of the text-formatted log file (for Windows) (optional).

(l) Remove SAP system logging monitoring configuration (optional)

For details about deleting the monitoring configuration for SAP system's system log information, see 1.21.2(9)(d) Deleting the monitoring settings of the text-formatted log file (for Windows) (optional).

(m) Monitoring CCMS alerting for SAP system (optional)

To newly monitor CCMS alert information in an SAP system, perform the following steps.

  1. Create a CCMS alert information monitoring definition file for the SAP system.

    Create a text-formatted log file monitoring definition file by copying the sample file (fluentd_sap_alertlog_tail.conf). For the storage location of the sample file, see the File/Directory list in Appendix A.4 JP1/IM - Agent in the JP1/Integrated Management 3 - Manager Overview and System Design Guide. Change the name of the copied file to "fluentd_log-monitoring-name_tail.conf".

    For details about text-formatted log file monitoring definition file descriptions, see text-formatted log file monitoring definition file (fluentd_@@trapname@@_tail.conf.template) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details about how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  2. Edit the log monitoring target definition file (jpc_fluentd_common_list.conf).

    If you are performing operations that temporarily stop log monitoring for some monitoring definition files, define the name of the created monitoring definition file in the log monitoring target definition file.

    For details about the log monitoring target definition file, see Log monitoring target definition file (jpc_fluentd_common_list.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference. If you have not edited the log monitoring target definition file, no editing is required here.

    For details about how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  3. Apply in integrated operation viewer tree.

    For details about how to apply, see 1.21.2(19) Creation and import of IM management node tree data (for Windows) (required).

(n) Modify SAP system CCMS alert information monitoring settings (optional)

For details about how to change the SAP system CCMS alert information monitoring settings, see 1.21.2(9)(c) Modifying the monitoring settings of the text-formatted log file (for Windows) (optional).

(o) Remove SAP system CCMS alert information monitoring settings (optional)

For details about how to delete SAP system CCMS alert information monitoring settings, see 1.21.2(9)(d) Deleting the monitoring settings of the text-formatted log file (for Windows) (optional).

(10) Setting up scraping definitions

If you want features provided by the add-on programs to be scraped, provide the scraping definitions listed in the following table.

Table 1‒18: Scraping definitions for the features provided by the add-on programs

  • Windows performance data collection

    OS that the add-on runs on: Windows

    Exporter or target: Windows exporter

    Scraping definition: Definitions are not required.

  • Linux process data collection

    OS that the add-on runs on: Linux

    Exporter or target: Process exporter

    Scraping definition: Definitions are not required.

  • AWS CloudWatch performance data collection

    OS that the add-on runs on: Windows and Linux

    Exporter or target: Yet another cloudwatch exporter

    Scraping definition: Definitions are not required.

  • Azure Monitor performance data collection

    OS that the add-on runs on: Windows and Linux

    Exporter or target: Promitor

    Scraping definition: Definitions are not required.

  • Log metrics

    OS that the add-on runs on: Windows and Linux

    Exporter or target: Fluentd

    Scraping definition: The definition must be provided to use the log metrics feature. For details on what definition is needed, see 1.21.2(10)(a) Scraping definition for the log metrics feature.

  • UAP monitoring

    OS that the add-on runs on: Windows and Linux

    Exporter or target: Script exporter

    Scraping definition: A monitoring target script must be set up. For details on what settings are needed, see 1.21.2(10)(b) Scraping definition for Script exporter.

(a) Scraping definition for the log metrics feature

The log metrics feature is scraped in the same way as a user-defined exporter, based on the following scraping definitions.

  • Create a user-specific discovery configuration file (required)

    Create a user-specific discovery configuration file (user_file_sd_config_any-name.yml) and define what should be monitored.

    For details on the user-specific discovery configuration file, see the section describing the applicable file in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details on what should be defined for the log metrics feature, see the section describing sample files for the applicable file under 3.15.1(1)(l) Fluentd (Log metrics) in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

  • Set up scrape_configs in the Prometheus configuration file (required)

    Add the scrape_configs setting in the Prometheus configuration file (jpc_prometheus_server.yml).

    For details on the Prometheus configuration file, see the section describing the applicable file in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details on what should be defined for the log metrics feature, see the section describing sample files for the applicable file under 3.15.1(1)(l) Fluentd (Log metrics) in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.
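As a minimal sketch, assuming a hypothetical discovery configuration file name user_file_sd_config_logmetrics.yml, a hypothetical job name, and the default log metrics port 24820, the two definitions might take the following shape; the labels and relabeling actually required are shown in the sample files referenced above.

Contents of user_file_sd_config_logmetrics.yml (host name hostA is an assumption):

- targets:
  - 'hostA:24820'
  labels:
    # labels required for the log metrics feature; see the sample files above

Addition to scrape_configs in the Prometheus configuration file (jpc_prometheus_server.yml):

scrape_configs:
  - job_name: any-scraping-job-name
    file_sd_configs:
      - files:
        - 'user_file_sd_config_logmetrics.yml'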

(b) Scraping definition for Script exporter

You can specify a scraping definition by using either the http_sd_config method, which runs all scripts defined in the Script exporter configuration file (jpc_script_exporter.yml), or the file_sd_config method, which runs one of the scripts defined in the Script exporter configuration file (jpc_script_exporter.yml) by specifying it as a params element of scrape_configs in the Prometheus configuration file (jpc_prometheus_server.yml). The default is the http_sd_config method.

For details on the Script exporter configuration file and the Prometheus configuration file, see the section describing the applicable file in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The following shows scraping definition examples.

  • Example of a scraping definition with the http_sd_config method

scrape_configs:
  - job_name: jpc_script_exporter
    http_sd_configs:
      - url: http://installation-host-name:port/discovery
    relabel_configs:
      - source_labels: [__param_script]
        target_label: jp1_pc_script
      - target_label: jp1_pc_exporter
        replacement: JPC Script Exporter
      - target_label: jp1_pc_category
        replacement: any-category-name
      - target_label: jp1_pc_trendname
        replacement: script_exporter
      - target_label: jp1_pc_multiple_node
        replacement: jp1_pc_exporter="{job='jpc_script.*',jp1_pc_multiple_node=''}"
      - target_label: jp1_pc_nodelabel
        replacement: Script metric collector(Script exporter)
      - target_label: jp1_pc_agent_create_flag
        replacement: false
    metric_relabel_configs:
      - source_labels: [jp1_pc_script]
        target_label: jp1_pc_nodelabel
      - regex: (jp1_pc_multiple_node|jp1_pc_script|jp1_pc_agent_create_flag)
        action: labeldrop
installation-host-name

Specify the name of the host where Script exporter has been installed, with 1 to 255 characters other than control characters.

port

Specify the port number that Script exporter uses.

any-category-name

Specify a category ID of the IM management node for the agent SID, with 1 to 255 characters other than control characters.

  • Example of a scraping definition with the file_sd_config method

scrape_configs:
# Example of running a script in a configuration file
  - job_name: any-scraping-job-name-1
    file_sd_configs:
      - files:
        - 'path-to-the-Script-exporter-discovery-configuration-file'
    metrics_path: /probe
    params:
      script: [scripts.name-in-the-Script-exporter-configuration-file]
    relabel_configs:
      - source_labels: [__param_script]
        target_label: jp1_pc_script
      - target_label: jp1_pc_category
        replacement: any-category-name
      - target_label: jp1_pc_nodelabel
        replacement: Script metric collector(Script exporter)
    metric_relabel_configs:
      - source_labels: [jp1_pc_script]
        target_label: jp1_pc_nodelabel
      - regex: (jp1_pc_multiple_node|jp1_pc_script|jp1_pc_agent_create_flag)
        action: labeldrop
 
# Example of running a script in a configuration file with additional arguments 1 and 2 added to the script
  - job_name: any-scraping-job-name-2
    file_sd_configs:
      - files:
        - 'path-to-the-Script-exporter-discovery-configuration-file'
    metrics_path: /probe
    params:
      script: [scripts.name-in-the-Script-exporter-configuration-file]
      argument-name-1: [argument-name-1-value]
      argument-name-2: [argument-name-2-value]
    relabel_configs:
      - source_labels: [__param_script]
        target_label: jp1_pc_script
      - target_label: jp1_pc_category
        replacement: any-category-name
      - target_label: jp1_pc_nodelabel
        replacement: Script metric collector(Script exporter)
    metric_relabel_configs:
      - source_labels: [jp1_pc_script]
        target_label: jp1_pc_nodelabel
      - regex: (jp1_pc_multiple_node|jp1_pc_script|jp1_pc_agent_create_flag)
        action: labeldrop
any-scraping-job-name

Specify a given scraping job name that is unique on the host, with 1 to 255 characters other than control characters.

path-to-the-Script-exporter-discovery-configuration-file

Specify the path to the Script exporter discovery configuration file (jpc_file_sd_config_script.yml).

any-category-name

Specify a category ID of the IM management node for the agent SID, with 1 to 255 characters other than control characters.

(11) Setting up container monitoring

You use different features and setup procedures to monitor different monitoring targets. The following table lists what feature you will use and where you can see the setup procedure for each monitoring target.

  • Red Hat OpenShift, Kubernetes, and Amazon Elastic Kubernetes Service (EKS)

    Feature you use: User-defined Prometheus

    Setup procedure: See the subsections following this table.

  • Azure Kubernetes Service (AKS)

    Feature you use: Azure's monitoring feature (Promitor). AKS can be monitored by default.

    Setup procedure: See 1.21.2(8) Set up of Promitor.

- Setting the Authentication Method for User-Specific Prometheus (Required)

  1. See the following reference and instruction to obtain the secret:

    • Reference

      When registering or acquiring IM client secret for use with your own OSS in 3.7.2 IM client secret in the JP1/Integrated Management 3 - Manager Overview and System Design Guide

    • Instruction

      Click Add/Delete IM Clients in the Option menu of the Integrated Operation Viewer window to display the List of IM Clients window. Then click the Add button and enter the IM client ID in the Add IM Client dialog box to obtain the secret.

  2. Create a secret in Red Hat OpenShift or a similar environment so that the obtained secret is set in the Authorization header of the remote write configuration, as shown below. For the remote write configuration itself, see the remote_write section in 1.21.2(11)(b) Configuring the settings for a connection (required).

    - Authorization Header Configuration for remote write

    • When specifying Basic as the authentication method:

      "Basic " + Client-ID:Base64-encoded-ASCII-string-of-Client-Secret

    • When specifying basic_auth as the authentication method:

      "basic_auth " + Client-ID:Base64-encoded-ASCII-string-of-Client-Secret

    Below is an example definition of a secret created in Red Hat OpenShift. For the creation method, see the manual or documentation of the Red Hat OpenShift you are using.

    - Example Definition of Secret

    The following is an example of the definition when the Client ID obtained in Step 1 is openshift_prometheus and the secret is secret.

    Note: Depending on Red Hat OpenShift specifications, non-Base64-encoded ASCII strings may be set for the Client ID and secret.

      kind: Secret
      apiVersion: v1
      metadata:
        name: jp1-basic-auth
        namespace: openshift-monitoring
      data:
        username: b3BlbnNoaWZ0X3Byb21ldGhldXM=
        password: c2VjcmV0
      type: kubernetes.io/basic-auth
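    If the definition above is saved to a file (for example, jp1-basic-auth.yaml; the file name is an assumption), it can typically be registered with the OpenShift CLI as follows:

    $ oc apply -f jp1-basic-auth.yaml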

(a) Configuring the settings of scraping through user-defined Prometheus (required)

  • Red Hat OpenShift

    Settings are not required.

    An openshift-monitoring project is installed and a scraping setting is added during installation.

  • Kubernetes and Amazon Elastic Kubernetes Service (EKS)

    When Prometheus is not installed, or when Prometheus is installed but the scraping targets listed in the following table are not configured, add a scraping setting.

    Scraping target: kube-state-metrics

      Data you can retrieve: Status data on nodes, pods, and workloads

    Scraping target: node_exporter

      Data you can retrieve: Node's performance data

    Scraping target: kubelet

      Data you can retrieve: Pod's performance data

    For the metrics you can collect from each scraping target, see the section describing Key metric items of 3.15.4(2)(a) Red Hat OpenShift in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.
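    As a minimal sketch, a scraping setting added for kube-state-metrics might look like the following; the service address and port are assumptions that depend on where kube-state-metrics is deployed in your cluster:

    scrape_configs:
      - job_name: kube-state-metrics
        static_configs:
          - targets: ['kube-state-metrics.kube-system.svc:8080']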

The subsequent steps are common to Red Hat OpenShift, Kubernetes, and Amazon Elastic Kubernetes Service (EKS).

(b) Configuring the settings for a connection (required)

Configure the remote write setting to collect information from user-defined Prometheus.

For details on how to modify the settings for each monitoring target, see 1.21.2(11)(c) Modifying the Prometheus settings (Red Hat OpenShift) and 1.21.2(11)(d) Modifying the Prometheus settings (Amazon Elastic Kubernetes Service (EKS)).

To comply with the specifications of Red Hat OpenShift, configuration items must be written in camel case notation. Therefore, the settings in the following sections must be written using camel case notation.

  • global.external_labels section

  • remote_write section

- Examples of notation changes

  Snake case notation    Camel case notation
  source_labels          sourceLabels
  target_label           targetLabel

- global.external_labels section
  • jp1_pc_prome_hostname (required)

    Specify the name of the Prometheus host.

  • jp1_pc_prome_clustername (optional)

    Specify the name of the cluster.

    When this label is omitted, the system does not create an IM management node for the cluster.

(Example)

global:
  external_labels:
    jp1_pc_prome_hostname: promHost
    jp1_pc_prome_clustername: myCluster
- remote_write section
  • Configure a connection destination

    Specify an endpoint of JP1/IM - Manager (Intelligent Integrated Management Base) as the remote write destination. Specify one endpoint and, in its section, enter settings (1) and (2) of the labels required for container monitoring. To create the cluster nodes, specify another endpoint and enter settings (3).

  • Configure labels necessary for container monitoring

    To give labels necessary for container monitoring, add the statement shown below to the write_relabel_configs section. It does not affect any local storage for user-defined Prometheus because it is relabeled during remote writing.

    When you monitor Red Hat OpenShift, add the following settings at the beginning of (1) Basic settings and (2) Settings for creating the pod nodes.

  - source_labels: ['__name__']
    regex: 'kube_.*|container_.*'
    target_label: instance
    replacement: any-value-that-is-unique-within-the-cluster#

#: The value specified here is displayed in the tree as the host on which Kubernetes metric collector runs.

(1) Basic settings

  - source_labels: ['__name__']
    regex: 'kube_job_status_failed_addlabel|kube_job_owner|kube_pod_status_phase|kube_daemonset_status_desired_number_scheduled|kube_daemonset_status_current_number_scheduled|kube_deployment_spec_replicas|kube_deployment_status_replicas_available|kube_replicaset_spec_replicas|kube_replicaset_status_ready_replicas|kube_replicaset_owner|kube_statefulset_replicas|kube_statefulset_status_replicas_ready|kube_node_status_condition|container_cpu_usage_seconds_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes|node_boot_time_seconds|node_context_switches_total|node_cpu_seconds_total|node_disk_io_now|node_disk_io_time_seconds_total|node_disk_read_bytes_total|node_disk_read_time_seconds_total|node_disk_reads_completed_total|node_disk_write_time_seconds_total|node_disk_writes_completed_total|node_disk_written_bytes_total|node_filesystem_avail_bytes|node_filesystem_files|node_filesystem_files_free|node_filesystem_free_bytes|node_filesystem_size_bytes|node_intr_total|node_load1|node_load15|node_load5|node_memory_Active_file_bytes|node_memory_Buffers_bytes|node_memory_Cached_bytes|node_memory_Inactive_file_bytes|node_memory_MemAvailable_bytes|node_memory_MemFree_bytes|node_memory_MemTotal_bytes|node_memory_SReclaimable_bytes|node_memory_SwapFree_bytes|node_memory_SwapTotal_bytes|node_netstat_Icmp6_InMsgs|node_netstat_Icmp_InMsgs|node_netstat_Icmp6_OutMsgs|node_netstat_Icmp_OutMsgs|node_netstat_Tcp_InSegs|node_netstat_Tcp_OutSegs|node_netstat_Udp_InDatagrams|node_netstat_Udp_OutDatagrams|node_network_flags|node_network_iface_link|node_network_mtu_bytes|node_network_receive_errs_total|node_network_receive_packets_total|node_network_transmit_colls_total|node_network_transmit_errs_total|node_network_transmit_packets_total|node_time_seconds|node_uname_info|node_vmstat_pswpin|node_vmstat_pswpout'
    action: 'keep'
  - source_labels: ['__name__']
    target_label: __name__
    regex: "kube_job_status_failed_addlabel"
    replacement: "kube_job_status_failed"
    action: replace
  - source_labels: ['__name__', 'name']
    regex: '(container_cpu_usage_seconds_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes);'
    action: drop
  - source_labels: ['__name__', 'pod']
    regex: '(container_fs_reads_bytes_total|container_fs_writes_bytes_total);'
    action: drop
  - source_labels: ['__name__','namespace']
    regex: '(kube_pod_|kube_job_|container_).*;(.*)'
    target_label: jp1_pc_nodelabel
    replacement: ${2}
  - source_labels: ['__name__','node']
    regex: 'kube_node_.*;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__','daemonset']
    regex: 'kube_daemonset_.*;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__','deployment']
    regex: 'kube_deployment_.*;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__','replicaset']
    regex: 'kube_replicaset_.*;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__','statefulset']
    regex: 'kube_statefulset_.*;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__','owner_kind','owner_name']
    regex: '(kube_job_status_failed|kube_job_owner);CronJob;(.*)'
    target_label: jp1_pc_nodelabel
    replacement: $2
  - source_labels: ['__name__']
    regex: 'node_.*'
    target_label: jp1_pc_nodelabel
    replacement: Linux metric collector(Node exporter)
  - source_labels: ['__name__']
    regex: '(kube_pod_|kube_job_|container_).*'
    target_label: jp1_pc_module
    replacement: kubernetes/Namespace
  - source_labels: ['__name__']
    regex: 'kube_node_.*'
    target_label: jp1_pc_module
    replacement: kubernetes/Node
  - source_labels: ['__name__']
    regex: 'kube_daemonset_.*'
    target_label: jp1_pc_module
    replacement: kubernetes/DaemonSet
  - source_labels: ['__name__']
    regex: 'kube_deployment_.*'
    target_label: jp1_pc_module
    replacement: kubernetes/Deployment
  - source_labels: ['__name__']
    regex: 'kube_replicaset_.*'
    target_label: jp1_pc_module
    replacement: kubernetes/ReplicaSet
  - source_labels: ['__name__']
    regex: 'kube_statefulset_.*'
    target_label: jp1_pc_module
    replacement: kubernetes/StatefulSet
  - source_labels: ['__name__','owner_kind']
    regex: '(kube_job_status_failed|kube_job_owner);CronJob'
    target_label: jp1_pc_module
    replacement: kubernetes/CronJob
  - source_labels: ['__name__']
    regex: 'kube_.*|container_.*'
    target_label: jp1_pc_trendname
    replacement: kubernetes
  - source_labels: ['__name__']
    regex: 'node_.*'
    target_label: jp1_pc_trendname
    replacement: node_exporter
  - source_labels: ['__name__']
    regex: 'kube_.*'
    target_label: jp1_pc_exporter
    replacement: JPC Kube Exporter
  - source_labels: ['__name__']
    regex: 'node_.*'
    target_label: jp1_pc_exporter
    replacement: JPC Node exporter
  - source_labels: ['__name__']
    regex: 'container_.*'
    target_label: jp1_pc_exporter
    replacement: JPC Kube Exporter
  - source_labels: ['__name__']
    regex: 'kube_.*'
    target_label: job
    replacement: jpc_kube_exporter
  - source_labels: ['__name__']
    regex: 'node_.*'
    target_label: job
    replacement: jpc_kube_node
  - source_labels: ['__name__']
    regex: 'container_.*'
    target_label: job
    replacement: jpc_kube_exporter
  - source_labels: ['__name__']
    regex: 'node_.*'
    target_label: jp1_pc_category
    replacement: platform
  - source_labels: ['job','instance']
    regex: 'jpc_kube_exporter;([^:]+):?(.*)'
    target_label: jp1_pc_remote_monitor_instance
    replacement: ${1}:Kubernetes metric collector
  - source_labels: ['job','jp1_pc_prome_hostname','jp1_pc_remote_monitor_instance']
    regex: 'jpc_kube_exporter;(.*);'
    target_label: jp1_pc_remote_monitor_instance
    replacement: ${1}:Kubernetes metric collector
  - regex: '__.+__|jp1_pc_prome_hostname|jp1_pc_prome_clustername|jp1_pc_nodelabel|jp1_pc_trendname|jp1_pc_module|jp1_pc_exporter|jp1_pc_remote_monitor_instance|instance|job|cronjob|namespace|schedule|concurrency_policy|daemonset|deployment|condition|status|job_name|owner_kind|owner_name|owner_is_controller|reason|replicaset|statefulset|revision|phase|node|kernel_version|os_image|container_runtime_version|kubelet_version|kubeproxy_version|pod_cidr|provider_id|system_uuid|internal_ip|key|value|effect|resource|unit|pod|host_ip|pod_ip|created_by_kind|created_by_name|uid|priority_class|host_network|ip|ip_family|image_image_id|image_spec|container_id|container|type|persistentvolumeclaim|label_.+_LABEL|id|name|device|major|minor|operation|cpu|mode|failure_type|scope'
    action: 'labelkeep'

(2) Settings for creating the pod nodes

  - source_labels: ['__name__']
    regex: 'kube_pod_owner|kube_pod_status_phase|container_cpu_usage_seconds_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes'
    action: 'keep'
  - source_labels: ['__name__', 'name']
    regex: '(container_cpu_usage_seconds_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes);'
    action: drop
  - source_labels: ['__name__', 'pod']
    regex: '(container_fs_reads_bytes_total|container_fs_writes_bytes_total);'
    action: drop
  - source_labels: ['pod']
    target_label: jp1_pc_nodelabel
  - target_label: jp1_pc_module
    replacement: kubernetes/Pod
  - target_label: jp1_pc_trendname
    replacement: kubernetes
  - target_label: jp1_pc_exporter
    replacement: JPC Kube Exporter
  - target_label: job
    replacement: jpc_kube_exporter
  - source_labels: ['instance']
    regex: '([^:]+):?(.*)'
    target_label: jp1_pc_remote_monitor_instance
    replacement: ${1}:Kubernetes metric collector
  - source_labels: [jp1_pc_prome_hostname,jp1_pc_remote_monitor_instance]
    regex: '(.*);'
    target_label: jp1_pc_remote_monitor_instance
    replacement: ${1}:Kubernetes metric collector
  - regex: '__.+__|jp1_pc_prome_hostname|jp1_pc_prome_clustername|jp1_pc_nodelabel|jp1_pc_trendname|jp1_pc_module|jp1_pc_exporter|jp1_pc_remote_monitor_instance|instance|job|cronjob|namespace|schedule|concurrency_policy|daemonset|deployment|condition|status|job_name|owner_kind|owner_name|owner_is_controller|reason|replicaset|statefulset|revision|phase|node|kernel_version|os_image|container_runtime_version|kubelet_version|kubeproxy_version|pod_cidr|provider_id|system_uuid|internal_ip|key|value|effect|resource|unit|pod|host_ip|pod_ip|created_by_kind|created_by_name|uid|priority_class|host_network|ip|ip_family|image_image_id|image_spec|container_id|container|type|persistentvolumeclaim|label_.+_LABEL|id|name|device|major|minor|operation|cpu|mode|failure_type|scope'
    action: labelkeep

(3) Settings for creating the cluster nodes

  - source_labels: ['__name__','jp1_pc_prome_clustername']
    regex: '(container_cpu_usage_seconds_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes);(.+)'
    action: 'keep'
  - source_labels: ['__name__', 'name']
    regex: '(container_cpu_usage_seconds_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes);'
    action: drop
  - source_labels: ['__name__', 'pod']
    regex: '(container_fs_reads_bytes_total|container_fs_writes_bytes_total);'
    action: drop
  - source_labels: ['jp1_pc_prome_clustername']
    target_label: jp1_pc_nodelabel
  - target_label: jp1_pc_module
    replacement: kubernetes/Cluster
  - target_label: jp1_pc_trendname
    replacement: kubernetes
  - target_label: jp1_pc_exporter
    replacement: JPC Kube Exporter
  - target_label: job
    replacement: jpc_kube_exporter
  - source_labels: ['instance']
    regex: '([^:]+):?(.*)'
    target_label: jp1_pc_remote_monitor_instance
    replacement: ${1}:Kubernetes metric collector
  - source_labels: [jp1_pc_prome_hostname,jp1_pc_remote_monitor_instance]
    regex: '(.*);'
    target_label: jp1_pc_remote_monitor_instance
    replacement: ${1}:Kubernetes metric collector
  - regex: '__.+__|jp1_pc_prome_hostname|jp1_pc_prome_clustername|jp1_pc_nodelabel|jp1_pc_trendname|jp1_pc_module|jp1_pc_exporter|jp1_pc_remote_monitor_instance|instance|job|cronjob|namespace|schedule|concurrency_policy|daemonset|deployment|condition|status|job_name|owner_kind|owner_name|owner_is_controller|reason|replicaset|statefulset|revision|phase|node|kernel_version|os_image|container_runtime_version|kubelet_version|kubeproxy_version|pod_cidr|provider_id|system_uuid|internal_ip|key|value|effect|resource|unit|pod|host_ip|pod_ip|created_by_kind|created_by_name|uid|priority_class|host_network|ip|ip_family|image_image_id|image_spec|container_id|container|type|persistentvolumeclaim|label_.+_LABEL|id|name|device|major|minor|operation|cpu|mode|failure_type|scope'
    action: labelkeep
- Configuring PrometheusRule

Register the following settings in PrometheusRule. When you register these settings in PrometheusRule, a new metric named kube_job_status_failed_addlabel is saved. If you reference metrics for container monitoring outside of JP1/IM, ensure that this metric is not remote-written to any destination other than JP1/IM.

spec:
  groups:
  - name: any-name
    rules:
    - expr: kube_job_status_failed * on(job_name, namespace) group_left(owner_kind,owner_name) kube_job_owner
      record: kube_job_status_failed_addlabel
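For reference, a complete PrometheusRule object wrapping the spec above might look like the following sketch; the metadata values are assumptions that must be adapted to your environment, and monitoring.coreos.com/v1 is the prometheus-operator API used by Red Hat OpenShift:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: jp1-kube-job-addlabel
  namespace: openshift-monitoring
spec:
  groups:
  - name: any-name
    rules:
    - expr: kube_job_status_failed * on(job_name, namespace) group_left(owner_kind,owner_name) kube_job_owner
      record: kube_job_status_failed_addlabel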

(c) Modifying the Prometheus settings (Red Hat OpenShift)

- Prerequisites

  • You can access the cluster as a user with the cluster-admin role.

  • The OpenShift CLI (oc) has been installed.

- Procedure

  1. Check whether a ConfigMap object is created.

    $ oc -n openshift-monitoring get configmap cluster-monitoring-config
  2. If the ConfigMap object does not exist, create a new file.

    $ vi cluster-monitoring-config.yaml
  3. If the ConfigMap object exists, edit the cluster-monitoring-config object in the openshift-monitoring project.

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  4. Add the settings in camel case to data/config.yaml/prometheusK8s.

    Define the secret obtained in Setting the Authentication Method for User-Specific Prometheus (Required) under the url: field. The following example is based on the Example Definition of Secret in that section (where the obtained Client ID is openshift_prometheus and the secret is secret).

    (Example)

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          externalLabels:
            jp1_pc_prome_hostname: promHost
            jp1_pc_prome_clustername: myCluster
          remoteWrite:
          - url: "http://host-name-of-JP1/IM - Manager (Intelligent Integrated Management Base):20703/im/api_system/v1/trendData/write"
            basicAuth:
              username:
                name: jp1-basic-auth
                key: username
              password:
                name: jp1-basic-auth
                key: password
            writeRelabelConfigs:
            - sourceLabels: ['__name__']
              regex: 'kube_.*|container_.*'
              targetLabel: instance
              replacement: any-value-that-is-unique-within-the-cluster
            - sourceLabels: ['__name__']
              regex: 'kube_job_status_failed_addlabel|kube_job_owner|kube_pod_status_phase|kube_daemonset_status_desired_number_scheduled|kube_daemonset_status_current_number_scheduled|kube_deployment_spec_replicas|kube_deployment_status_replicas_available|kube_replicaset_spec_replicas|kube_replicaset_status_ready_replicas|kube_replicaset_owner|kube_statefulset_replicas|kube_statefulset_status_replicas_ready|kube_node_status_condition|container_cpu_usage_seconds_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes|node_boot_time_seconds|node_context_switches_total|node_cpu_seconds_total|node_disk_io_now|node_disk_io_time_seconds_total|node_disk_read_bytes_total|node_disk_read_time_seconds_total|node_disk_reads_completed_total|node_disk_write_time_seconds_total|node_disk_writes_completed_total|node_disk_written_bytes_total|node_filesystem_avail_bytes|node_filesystem_files|node_filesystem_files_free|node_filesystem_free_bytes|node_filesystem_size_bytes|node_intr_total|node_load1|node_load15|node_load5|node_memory_Active_file_bytes|node_memory_Buffers_bytes|node_memory_Cached_bytes|node_memory_Inactive_file_bytes|node_memory_MemAvailable_bytes|node_memory_MemFree_bytes|node_memory_MemTotal_bytes|node_memory_SReclaimable_bytes|node_memory_SwapFree_bytes|node_memory_SwapTotal_bytes|node_netstat_Icmp6_InMsgs|node_netstat_Icmp_InMsgs|node_netstat_Icmp6_OutMsgs|node_netstat_Icmp_OutMsgs|node_netstat_Tcp_InSegs|node_netstat_Tcp_OutSegs|node_netstat_Udp_InDatagrams|node_netstat_Udp_OutDatagrams|node_network_flags|node_network_iface_link|node_network_mtu_bytes|node_network_receive_errs_total|node_network_receive_packets_total|node_network_transmit_colls_total|node_network_transmit_errs_total|node_network_transmit_packets_total|node_time_seconds|node_uname_info|node_vmstat_pswpin|node_vmstat_pswpout'
              action: 'keep'
            - sourceLabels: ['__name__']
              targetLabel: __name__
              regex: "kube_job_status_failed_addlabel"
              replacement: "kube_job_status_failed"
              action: replace
            - sourceLabels: ['__name__', 'name']
              regex: '(container_cpu_usage_seconds_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes);'
              action: drop
            - sourceLabels: ['__name__', 'pod']
              regex: '(container_fs_reads_bytes_total|container_fs_writes_bytes_total);'
              action: drop
            - sourceLabels: ['__name__','namespace']
              regex: '(kube_pod_|kube_job_|container_).*;(.*)'
              targetLabel: jp1_pc_nodelabel
              replacement: $2
            - sourceLabels: ['__name__','node']
              regex: 'kube_node_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','daemonset']
              regex: 'kube_daemonset_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','deployment']
              regex: 'kube_deployment_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','replicaset']
              regex: 'kube_replicaset_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','statefulset']
              regex: 'kube_statefulset_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','owner_kind','owner_name']
              regex: '(kube_job_status_failed|kube_job_owner);CronJob;(.*)'
              targetLabel: jp1_pc_nodelabel
              replacement: $2
            - sourceLabels: ['__name__']
              regex: 'node_.*'
              targetLabel: jp1_pc_nodelabel
              replacement: Linux metric collector(Node exporter)
            - sourceLabels: ['__name__']
              regex: '(kube_pod_|kube_job_|container_).*'
              targetLabel: jp1_pc_module
              replacement: kubernetes/Namespace
            - sourceLabels: ['__name__']
              regex: 'kube_node_.*'
              targetLabel: jp1_pc_module
              replacement: kubernetes/Node
            - sourceLabels: ['__name__']
              regex: 'kube_daemonset_.*'
              targetLabel: jp1_pc_module
              replacement: kubernetes/DaemonSet
            - sourceLabels: ['__name__']
              regex: 'kube_deployment_.*'
              targetLabel: jp1_pc_module
              replacement: kubernetes/Deployment
            - sourceLabels: ['__name__']
              regex: 'kube_replicaset_.*'
              targetLabel: jp1_pc_module
              replacement: kubernetes/ReplicaSet
            - sourceLabels: ['__name__']
              regex: 'kube_statefulset_.*'
              targetLabel: jp1_pc_module
              replacement: kubernetes/StatefulSet
            - sourceLabels: ['__name__','owner_kind']
              regex: '(kube_job_status_failed|kube_job_owner);CronJob'
              targetLabel: jp1_pc_module
              replacement: kubernetes/CronJob
            - sourceLabels: ['__name__']
              regex: 'kube_.*|container_.*'
              targetLabel: jp1_pc_trendname
              replacement: kubernetes
            - sourceLabels: ['__name__']
              regex: 'node_.*'
              targetLabel: jp1_pc_trendname
              replacement: node_exporter
            - sourceLabels: ['__name__']
              regex: 'kube_.*'
              targetLabel: jp1_pc_exporter
              replacement: JPC Kube Exporter
            - sourceLabels: ['__name__']
              regex: 'node_.*'
              targetLabel: jp1_pc_exporter
              replacement: JPC Node exporter
            - sourceLabels: ['__name__']
              regex: 'container_.*'
              targetLabel: jp1_pc_exporter
              replacement: JPC Kube Exporter
            - sourceLabels: ['__name__']
              regex: 'kube_.*'
              targetLabel: job
              replacement: jpc_kube_exporter
            - sourceLabels: ['__name__']
              regex: 'node_.*'
              targetLabel: job
              replacement: jpc_kube_node
            - sourceLabels: ['__name__']
              regex: 'container_.*'
              targetLabel: job
              replacement: jpc_kube_exporter
            - sourceLabels: ['__name__']
              regex: 'node_.*'
              targetLabel: jp1_pc_category
              replacement: platform
            - sourceLabels: ['job','instance']
              regex: 'jpc_kube_exporter;([^:]+):?(.*)'
              targetLabel: jp1_pc_remote_monitor_instance
              replacement: ${1}:Kubernetes metric collector
            - sourceLabels: ['job','jp1_pc_prome_hostname','jp1_pc_remote_monitor_instance']
              regex: 'jpc_kube_exporter;(.*);'
              targetLabel: jp1_pc_remote_monitor_instance
              replacement: ${1}:Kubernetes metric collector
            - regex: '__.+__|jp1_pc_prome_hostname|jp1_pc_prome_clustername|jp1_pc_nodelabel|jp1_pc_trendname|jp1_pc_module|jp1_pc_exporter|jp1_pc_remote_monitor_instance|instance|job|cronjob|namespace|schedule|concurrency_policy|daemonset|deployment|condition|status|job_name|owner_kind|owner_name|owner_is_controller|reason|replicaset|statefulset|revision|phase|node|kernel_version|os_image|container_runtime_version|kubelet_version|kubeproxy_version|pod_cidr|provider_id|system_uuid|internal_ip|key|value|effect|resource|unit|pod|host_ip|pod_ip|created_by_kind|created_by_name|uid|priority_class|host_network|ip|ip_family|image_image_id|image_spec|container_id|container|type|persistentvolumeclaim|label_.+_LABEL|id|name|device|major|minor|operation|cpu|mode|failure_type|scope'
              action: 'labelkeep'
  5. Save the file and apply the changes to the ConfigMap object.

    $ oc apply -f cluster-monitoring-config.yaml

(d) Modifying the Prometheus settings (Amazon Elastic Kubernetes Service (EKS))

- Procedure

  1. Create a yml file with a given name (example: my_prometheus_values.yml) and add the settings to the server section.

    • Settings in external_labels

      Add them to the global.external_labels section.

    • Settings in remote_write

      Add them to the remoteWrite section.

    (Example)

    server:
      global:
        external_labels:
          jp1_pc_prome_hostname: promHost
          jp1_pc_prome_clustername: myCluster
      remoteWrite:
        - url: http://host-name-of-JP1/IM - Manager (Intelligent Integrated Management Base):20703/im/api/v1/trendData/write
          write_relabel_configs:
            - source_labels: ['__name__']
              regex: 'kube_job_status_failed|kube_job_owner|kube_pod_status_phase|kube_daemonset_status_desired_number_scheduled|kube_daemonset_status_current_number_scheduled|kube_deployment_spec_replicas|kube_deployment_status_replicas_available|kube_replicaset_spec_replicas|kube_replicaset_status_ready_replicas|kube_replicaset_owner|kube_statefulset_replicas|kube_statefulset_status_replicas_ready|kube_node_status_condition|container_cpu_usage_seconds_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes|node_boot_time_seconds|node_context_switches_total|node_cpu_seconds_total|node_disk_io_now|node_disk_io_time_seconds_total|node_disk_read_bytes_total|node_disk_reads_completed_total|node_disk_writes_completed_total|node_disk_written_bytes_total|node_filesystem_avail_bytes|node_filesystem_files|node_filesystem_files_free|node_filesystem_free_bytes|node_filesystem_size_bytes|node_intr_total|node_load1|node_load15|node_load5|node_memory_Active_file_bytes|node_memory_Buffers_bytes|node_memory_Cached_bytes|node_memory_Inactive_file_bytes|node_memory_MemAvailable_bytes|node_memory_MemFree_bytes|node_memory_MemTotal_bytes|node_memory_SReclaimable_bytes|node_memory_SwapFree_bytes|node_memory_SwapTotal_bytes|node_netstat_Icmp6_InMsgs|node_netstat_Icmp_InMsgs|node_netstat_Icmp6_OutMsgs|node_netstat_Icmp_OutMsgs|node_netstat_Tcp_InSegs|node_netstat_Tcp_OutSegs|node_netstat_Udp_InDatagrams|node_netstat_Udp_OutDatagrams|node_network_flags|node_network_iface_link|node_network_mtu_bytes|node_network_receive_errs_total|node_network_receive_packets_total|node_network_transmit_colls_total|node_network_transmit_errs_total|node_network_transmit_packets_total|node_time_seconds|node_uname_info|node_vmstat_pswpin|node_vmstat_pswpout'
              action: keep
            - source_labels: ['__name__','namespace']
              regex: '(kube_pod_|kube_job_|container_).*;(.*)'
              target_label: jp1_pc_nodelabel
              replacement: $2
            - source_labels: ['__name__','node']
              regex: 'kube_node_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','daemonset']
              regex: 'kube_daemonset_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','deployment']
              regex: 'kube_deployment_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','replicaset']
              regex: 'kube_replicaset_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','statefulset']
              regex: 'kube_statefulset_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','owner_name']
              regex: 'kube_job_owner;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','instance']
              regex: 'node_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__']
              regex: 'kube_.*'
              target_label: jp1_pc_trendname
              replacement: kube_state_metrics
            - source_labels: ['__name__']
              regex: 'node_.*'
              target_label: jp1_pc_trendname
              replacement: node_exporter
            - source_labels: ['__name__']
              regex: 'container_.*'
              target_label: jp1_pc_trendname
              replacement: kubelet
            - source_labels: ['__name__']
              regex: 'kube_.*'
              target_label: jp1_pc_exporter
              replacement: JPC Kube state metrics
            - source_labels: ['__name__']
              regex: 'node_.*'
              target_label: jp1_pc_exporter
              replacement: JPC Node exporter
            - source_labels: ['__name__']
              regex: 'container_.*'
              target_label: jp1_pc_exporter
              replacement: JPC Kubelet
            - source_labels: ['__name__']
              regex: 'kube_.*'
              target_label: job
              replacement: jpc_kube_state
            - source_labels: ['__name__']
              regex: 'node_.*'
              target_label: job
              replacement: jpc_node
            - source_labels: ['__name__']
              regex: 'container_.*'
              target_label: job
              replacement: jp1_kubelet
  2. Apply the changes.

    helm upgrade prometheus-chart-name prometheus-community/prometheus -n prometheus-namespace -f my_prometheus_values.yml

(e) Configuring scraping targets (optional)

- Change monitoring targets

If you want JP1/IM to monitor only some of the monitoring targets in a user environment, specify the monitoring targets in the write_relabel_configs section, as in the following examples.

(Example 1) Specifying a whitelist of specific resources

  - source_labels: ['__name__','pod']
    regex: '(kube_pod_|container_).*;coredns-.*|prometheus'
    action: 'keep'

(Example 2) Specifying a blacklist applied to all resources

  - source_labels: ['jp1_pc_nodelabel']
    regex: 'coredns-.*|prometheus'
    action: 'drop'

In addition, to monitor metrics that have already been collected with a different aggregation type, add a remote_write section and define the type to be monitored.

- Change monitoring metrics

If you want to change metrics displayed in the Trends tab of the integrated operation viewer, edit the metric definition files.

The files you need to edit are as follows:

  • Node exporter metric definition file (metrics_node_exporter.conf)

  • Container monitoring metric definition file (metrics_kubernetes.conf)

For details on these metric definition files, see the sections describing the applicable files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(f) Configuring the system node definition file (imdd_systemnode.conf) (required)

To create a system node as described in (e) Tree format under 3.15.6(1)(i) Creating IM management nodes (__configurationGet method) in the JP1/Integrated Management 3 - Manager Overview and System Design Guide, edit the system node definition file (imdd_systemnode.conf) to configure the setting items listed in the table below. You can specify any values for the items that are not listed in the table.

Table 1‒19: Settings in the system node definition file (imdd_systemnode.conf)

  • displayName

    Specify the name of the Kubernetes component.

  • type

    Specify it in uppercase characters as follows:

    Kubernetes-Kubernetes-component-name

    Kubernetes-component-name is equivalent to one of the names under the Component name column of the section describing the component names monitored by Kubernetes in 3.15.4(2)(b) Kubernetes in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

  • name

    Specify it as:

    [{".*":"regexp"}]

The following table shows a setting example of the system node definition file when you create system management nodes for Kubernetes components that are found by default as monitoring targets in the container monitoring metric definition file.

Table 1‒20: Setting example of the system node definition file (imdd_systemnode.conf)

  displayName     type                            name
  Clusters        JP1PC-KUBERNETES-CLUSTER        [{".*":"regexp"}]
  Nodes           JP1PC-KUBERNETES-NODE           [{".*":"regexp"}]
  Namespaces      JP1PC-KUBERNETES-NAMESPACE      [{".*":"regexp"}]
  Deployments    JP1PC-KUBERNETES-DEPLOYMENT     [{".*":"regexp"}]
  DaemonSets      JP1PC-KUBERNETES-DAEMONSET      [{".*":"regexp"}]
  ReplicaSets     JP1PC-KUBERNETES-REPLICASET     [{".*":"regexp"}]
  StatefulSets    JP1PC-KUBERNETES-STATEFULSET    [{".*":"regexp"}]
  CronJobs        JP1PC-KUBERNETES-CRONJOB        [{".*":"regexp"}]
  Pods            JP1PC-KUBERNETES-POD            [{".*":"regexp"}]

The following shows how the items in the above table and Kubernetes at a higher level can be defined in the system node definition file.

{
  "meta":{
    "version":"2"
  },
  "allSystem":[
    {
      "id":"kubernetes",
      "displayName":"Kubernetes",
      "children":[
        {
          "id":"cluster",
          "displayName":"Clusters",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-CLUSTER",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"namespace",
          "displayName":"Namespaces",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-NAMESPACE",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"node",
          "displayName":"Nodes",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-NODE",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"deployment",
          "displayName":"Deployments",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-DEPLOYMENT",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"daemonset",
          "displayName":"DaemonSets",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-DAEMONSET",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"replicaset",
          "displayName":"ReplicaSets",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-REPLICASET",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"statefulset",
          "displayName":"StatefulSets",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-STATEFULSET",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"cronjob",
          "displayName":"CronJobs",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-CRONJOB",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"pod",
          "displayName":"Pods",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-POD",
              "name":[{".*":"regexp"}]
            }
          ]
        }
      ]
    }
  ]
}

With the system node definition file specified, an IM management node is displayed under the system node that has the corresponding Kubernetes component name when the jddcreatetree command is run.

For details on the system node definition file, see System node definition file (imdd_systemnode.conf) of JP1/IM - Manager in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(12) Editing the Script exporter definition file

(a) Specifying scripts as monitoring targets (required)

- Edit the Script exporter configuration file (jpc_script_exporter.yml)

Edit the Script exporter configuration file (jpc_script_exporter.yml) to define which scripts are to be monitored.

For details on the Script exporter configuration file, see Script exporter configuration file (jpc_script_exporter.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
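As a minimal sketch, an entry defining a monitored script might look like the following; the script name and command path are hypothetical, and the key names follow the OSS script_exporter configuration format, so check the reference above for the exact keys:

scripts:
  - name: check_app
    command: C:\scripts\check_app.bat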

(b) Modifying monitoring metrics (optional)

- Edit the Prometheus configuration file (jpc_prometheus_server.yml)

If you want to add metrics collected from the scripts, add them to the metric_relabel_configs section in the Prometheus configuration file (jpc_prometheus_server.yml).

For details on the Prometheus configuration file, see Prometheus configuration file (jpc_prometheus_server.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

scrape_configs:
  - job_name: jpc_script_exporter
      ...
    metric_relabel_configs:
      - source_labels: ['__name__']
        regex: 'script_success|script_duration_seconds|script_exit_code[Add metrics here]'
- Edit the Script exporter metric definition file (metrics_script_exporter.conf)

If you want to change metrics displayed in the Trends tab of the integrated operation viewer, edit the settings in the Script exporter metric definition file (metrics_script_exporter.conf).

For details on the Script exporter metric definition file, see Script exporter metric definition file (metrics_script_exporter.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(c) Changing Ports (optional)

The listen port used by Script exporter is specified in the --web.listen-address option of the script_exporter command.

For details about how to change the options of the script_exporter command, see 1.21.2(1)(c) Change command-line options (for Windows) or 2.19.2(1)(c) Change command-line options (for Linux). For details about the --web.listen-address option, see If you want to change command line options in Unit definition file (jpc_program-name.service) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is 20722. If you change the port number, review the firewall settings and prohibit access from outside.
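For example, to change the listen port to 20750 (a hypothetical value), the option would be specified as follows:

  --web.listen-address=":20750"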

Notes:
  • When specifying the host name for this option, the same host name must be set for targets in the Script exporter discovery configuration file (jpc_file_sd_config_script.yml) on the same host. If you specify with http_sd_config, also change url.

  • When specifying an IP address for this option, the host name that resolves to the IP address specified for this option must be set for targets in the Script exporter discovery configuration file (jpc_file_sd_config_script.yml) on the same host. If you specify with http_sd_config, also change url.

(d) Setting when executing SAP system log extract command (optional)

If you use Script exporter to execute the SAP system log extract command, configure the Script exporter configuration file settings by using the sample file (jpc_script_exporter_sap.yml) for SAP system monitoring.

For details about the sample file of the Script exporter configuration file for SAP system monitoring, see Sample file of Script exporter configuration file for SAP system monitoring (jpc_script_exporter_sap.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

For details on how to configure the settings, see Sample file of Script exporter configuration file for SAP system monitoring (jpc_script_exporter_sap.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference and 1.21.2(12) Editing the Script exporter definition file.

(13) Setting up Web scenario monitoring function

This section explains how to configure Web exporter when using the Web scenario monitoring function.

(a) Setting up JP1/IM - Agent

Set up JP1/IM - Agent so that the Web scenario monitoring function can be used.

■ Browser settings on agent host

  • Installing the browser

    Install a browser that Playwright and the Web scenario creation function can use to run Web scenario files.

    If the browser is already installed, installation is not required.

  • Setting the browser language

    The language used for browser display must be available in the browser.

    In the settings window of the browser to be used, check whether the language to be used has been added. If it has not been added, add it before running Codegen; otherwise, monitoring by Playwright may not work properly.

■ Configuring authentication

When Playwright accesses a monitored Web application that uses client authentication or HTTP authentication (Basic authentication), settings such as a certificate or password are required.

Table 1‒21: Monitoring conditions and required settings

Monitoring condition: Client authentication

Required file: Place the client certificate file in PKCS#12 format in any folder on the client host.

Required settings:

  • Import the certificate file into the browser on the client host.

  • Run the following command to register the registry key to use:

    - When using Google Chrome

    reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\AutoSelectCertificateForUrls" /v an-integer-of-1-or-more /t "REG_SZ" /d "{\"pattern\":\"client-authentication-required-web-site-URL\",\"filter\":{\"ISSUER\":{\"CN\":\"CN-of-CA-certificate\"}}}"

    - When using Microsoft Edge

    reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge\AutoSelectCertificateForUrls" /v an-integer-of-1-or-more /t "REG_SZ" /d "{\"pattern\":\"client-authentication-required-web-site-URL\",\"filter\":{\"ISSUER\":{\"CN\":\"CN-of-CA-certificate\"}}}"

  • Change the following option in the Playwright configuration file:

    <Before change>

    headless true

    <After change>

    headless false

Monitoring condition: Basic authentication

Required file: None

Required settings: In the Web scenario file, define a username and password. When you use the Web scenario creation function, enter the username and password as follows in the URL of the Web site where you want to log in:

http://Username:Password@domain-name:Port/Path...

■ Setting up the environment variables

Set the environment variables so that the JP1/IM - Agent host can run the npx playwright test command from Web exporter and can use the Web scenario creation support function and the trace viewer function. Perform the following steps:

  1. Display the System Properties dialog from Settings - System - About - Related Settings - Advanced System Settings.

  2. Click the Environment Variables button to display the Environment Variables dialog.

  3. Set the system variables as follows.

    Variable name: Path

    Variable value: JP1/IM-Agent-Installation-destination-folder\jp1ima\lib\nodejs

If you have already installed nodejs separately, or if you install nodejs in the future, configure the settings so that the nodejs bundled with JP1/IM - Agent takes priority.

■ Setting up Web exporter

You must also register a password for the user who runs the Web scenarios specified in the Web exporter configuration file (jpc_web_exporter.yml). For details on registering a password, see jimasecret in Chapter 1. Commands in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The Web exporter configuration file (jpc_web_exporter.yml) and the secret are read only when Web exporter starts. To reflect changes made to the configuration file or the secret after Web exporter has started, restart Web exporter.

■ Creating a New Web Scenario File

If you want to create a new Web scenario file, do the following:

  1. Stop Web exporter.

    If Web exporter is already running, stop it.

  2. Create a Web scenario file.

    Use the Web scenario creation support function to create Web scenario files. For details on the Web scenario creation support function, see the section describing the Web scenario creation support function in 3.15.1(1)(m) Web scenario monitoring function in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

    Output the code generated by the Web scenario creation support function to the Web scenario file. If an operation is recorded by mistake during operation recording and incorrect code is generated, exit Codegen and record the operations again.

    Place the created Web scenario file in the following folders:

    • For physical hosts

      JP1/IM-Agent-Installation-destination-folder\jp1ima\lib\playwright\tests\

    • For logical hosts

      shared-folder\jp1ima\lib\playwright\tests\

    For the code recorded by Codegen operations, see the description of the browser operations that can be recorded and measured as a Web scenario in Web scenario creation function (playwright codegen) in 3.15.1(1)(m) Web scenario monitoring function in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

    For the format of Web scenario file, see Web scenario file (any name.spec.ts) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details about configuration file editing procedure, see To edit the configuration files (for Windows) in 1.19.3(1)(a) Common way to setup.

■ Setting up Playwright configuration file

Add to the Playwright configuration file (jpc_playwright.config.ts) the definition that runs the Web scenario file you created. If a file other than the one stored at installation is used as the Playwright configuration file (including a copy made from the model file or the original configuration file), set the file access privilege to Administrators only.

For details about Playwright configuration file, see Playwright configuration file (jpc_playwright.config.ts) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

For details about configuration file editing procedure, see To edit the configuration files (for Windows) in 1.19.3(1)(a) Common way to setup.

■ Configuring Web exporter Discovery configuration file

  1. Edit the Web exporter discovery configuration file.

    Edit the Web exporter discovery configuration file (jpc_file_sd_config_web.yml) and define the names of the Web scenario files to run (a sketch follows this list).

    For details about the settings, see Web exporter discovery configuration file (jpc_file_sd_config_web.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

  2. Create a new Web exporter discovery configuration file (optional).

    Create a new discovery configuration file if you want to configure scrape jobs in addition to those defined by default in the Prometheus configuration file, for example, when you want to scrape more than one Web exporter with different timeouts.

    For details on how to create it, see Add a Web exporter discovery configuration file (optional) in 1.21.2(3)(i) Add a Web exporter scrape job (for Windows) (optional).
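
For illustration, a minimal sketch of an entry in the Web exporter discovery configuration file, assuming the standard Prometheus file_sd layout; the host:port value and the label key naming the Web scenario file are hypothetical placeholders (the actual keys are described in the reference above):

- targets:
  - agenthost:20725                        # illustrative Web exporter host:port
  labels:
    web_scenario_file: sample.spec.ts      # hypothetical label key; see the reference for the actual key name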

■ Setting up Web exporter Scrape Jobs

See Section 1.21.2(3)(i) Add a Web exporter scrape job (for Windows) (optional).

■ Starting Web exporter

Start Web exporter.

(14) Setting up VMware exporter

(a) Changing ports (for Windows) (optional)

The listen port used by VMware exporter is specified in the --port option of the vmware_exporter command. The changed port takes effect when the VMware exporter service is restarted.

For details about how to change the vmware_exporter command options, see 1.21.2(1)(c) Change command-line options (for Windows). For details about the --port option, see vmware_exporter command options in Service definition file (jpc_program-name_service.xml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is 20732. If you change the port number, review the firewall settings and prohibit external access.

(b) Add, change, or remove sections (for Windows) (mandatory)

For each monitored target (VMware ESXi), VMware exporter must define the connection information in the VMware exporter configuration file (jpc_vmware_exporter.yml). This definition is called a section.

The following section is defined by default as a sample:

Table 1‒22: Sections defined in the initial settings

Section name: default

Feature:

  • This section sets the first monitoring target.

  • Rewrite the values to those you will use (you must set the host name and port).

  • The section name "default" is subject to a presence check; do not delete or rename it.

To add monitored targets, copy the default section and configure one section definition for each monitored target.
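
For illustration, a minimal sketch of a default section plus one added section; apart from ignore_ssl, the key names shown here are illustrative, and the actual keys are described in the reference below:

default:
  vsphere_host: esxi-default.example.com   # host name of the first monitored VMware ESXi (required)
  vsphere_port: 443                        # port (required)
  ignore_ssl: False
esxi01:                                    # section copied from default for an additional target
  vsphere_host: esxi01.example.com
  vsphere_port: 443
  ignore_ssl: False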

If you change the settings, you must restart VMware exporter and Prometheus server services.

Sections are defined in VMware exporter configuration file (jpc_vmware_exporter.yml). For details about VMware exporter configuration file, see VMware exporter configuration file (jpc_vmware_exporter.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

Set the name of a new section to the same name as that specified in the corresponding parameter in the Prometheus configuration file (jpc_prometheus_server.yml). For details, see 1.21.2(3)(j) Add a VMware exporter scrape job (for Windows only) (optional).
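
As a sketch of this correspondence, assuming (as in the upstream vmware_exporter) that the section name is passed to the exporter as a scrape parameter; the job name and section name are illustrative, and the actual scrape job definition is described in 1.21.2(3)(j):

scrape_configs:
  - job_name: jpc_vmware_esxi01
    params:
      section: [esxi01]          # must match the section name in jpc_vmware_exporter.yml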

Before adding a new section, make sure that no VM names are duplicated across sections.

For the user used to connect to VMware ESXi, use a role other than No Access or Non-Encrypted Administrator.

For details about registering the password used to connect to VMware ESXi, see jimasecret in Chapter 1. Commands in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference. The maximum length of the password registered as a secret follows the secret management command (jimasecret). If the password length limit has been changed on VMware ESXi, set a password of up to 1,024 characters.

For instructions on updating VMware exporter configuration file and deploying the certificate file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

If you add a section definition, you need to define a scrape job to scrape using the newly created section from Prometheus server. For Prometheus server settings, see 1.21.2(3)(j) Add a VMware exporter scrape job (for Windows only) (optional).

The VMware exporter configuration file (jpc_vmware_exporter.yml) and the secret are read only when VMware exporter starts. To reflect changes made to the configuration file or the secret after VMware exporter has started, restart VMware exporter.

(c) Registering secrets (mandatory)

Register the secret used to connect to the monitoring target (VMware ESXi) defined in the default section.

Also, if you add any sections, you must register the corresponding secret.

For details on how to register a secret for the monitor target, see jimasecret in Chapter 1. Commands in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

■ Note on Secret Registration

For notes on obfuscating passwords for VMware ESXi with the Secret Management Command (jimasecret), see jimasecret in Chapter 1. Commands in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(d) Certificate Settings (mandatory)

When VMware exporter performs SSL authentication, certificates must be set up on the machine in the same way for both physical and logical hosts.

One of the following certificates is required:

  • Default certificate

    The certificate that is created when VMware is built.

  • vCenter Server certificate (certificate of the certificate authority)

    A certificate issued by vCenter Server.

  • Own certificate (self-signed server certificate)

    Refers to a certificate issued by the user.

The procedure for each certificate is shown below.

  • For default certificates

    1. Set ignore_ssl in the VMware exporter configuration file (jpc_vmware_exporter.yml) to True.

      Before applying this setting, confirm that it causes no operational problems.

  • For vCenter Server certificates or your own certificate#

    Perform the following steps to import the certificate:

    Note #: If an authentication error occurs with your own certificate, review the created certificate. If the connection still cannot be established after the review, confirm in advance that there are no operational problems, and then follow the procedure described under For default certificates.

    - For Windows

    1. Select Run from the Windows Start menu.

    2. In the Run dialog, type mmc and click the OK button.

    3. In Console 1, select File - Add or Remove Snap-ins.

    4. Select Certificates and click the Add button.

    5. Choose Computer account and click the Next button.

    6. Select Local computer and click the Finish button.

    7. Confirm that Certificates (Local Computer) has been added to Selected snap-ins, and click the OK button.

    8. Right-click on Certificates (Local Computer) - Trusted Root Certification Authorities - Certificates, and select the All Tasks - Import menu that appears.

    9. Click the Next button.

    10. In the File name text box, enter the filename of the certificate to be imported and click the Next button.

    11. Select Place all certificates in the following store and click the Next button.

    12. Click the Finish button and then click the OK button.

    - For Linux

    • For Linux 7 or Linux 8#

    Note #: This assumes that the ca-certificates package is installed on the OS.

    1. Run the following command to copy the certificate file:

      # cp certificate-file-path /etc/pki/ca-trust/source/anchors

    2. Update the trust store configuration.

      <Example for updating the system-wide trust store settings>

      # update-ca-trust

    • For Linux 9

    1. Place the certificate file on the server and run the following command:

      # PKICertImport -d . -n certificate-file-path -t "CT,C,C" -a -i ca_root.crt -u L

    • For Oracle Linux 7, Oracle Linux 8, or Oracle Linux 9

      Refer to the instructions for Linux 7 or Linux 8.

    • For SUSE Linux 12 or SUSE Linux 15

    1. Specify the locations of the certificate files in the SSL environment variables.

      export CA_CERT=certificate-file-path

      export SERVER_KEY=key-file-path

      export SERVER_CERT=server-certificate-path

    2. Run the SUSE Manager setup command.

      # yast susemanager_setup

    • For Amazon Linux 2023

      Refer to the instructions for Linux 7 or Linux 8.

(15) Setting Windows exporter (Hyper-V monitoring function)

(a) Changing ports (optional)

The listen port used by Windows exporter (Hyper-V monitoring) is specified in the --telemetry.addr option of the windows_exporter command.

For details about how to change the windows_exporter command options, see 1.21.2(1)(c) Change command-line options (for Windows). For details about the --telemetry.addr option, see windows_exporter command options (Hyper-V monitoring) in Service definition file (jpc_program-name_service.xml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is 20734. If you change the port number, review the firewall settings and prohibit external access.

(16) Specifying a listening port number and listening address (optional)

If you neither change the default port numbers nor limit IP addresses to listen to, you can skip this step.

For details on what you should modify if you want to change port numbers or IP addresses to listen to, see 1.21.2 Settings of JP1/IM - Agent (for Windows) and 2.19.2 Settings of JP1/IM - Agent (for Linux).

(17) Firewall setup (for Windows) (required)

You must set up the firewall to restrict external access as follows:

Table 1‒23: Firewall setup

Port: imagent port
Firewall setup: Access from outside is prohibited.

Port: imagentproxy port
Firewall setup: Access from outside is prohibited.

Port: imagentaction port
Firewall setup: Access from outside is prohibited.

Port: Alertmanager port
Firewall setup: Access from outside is prohibited. However, if you want to monitor Alertmanager by external monitoring from a Blackbox exporter on another host, allow that access. In that case, consider security measures such as restricting the source IP addresses.

Port: Prometheus_server port
Firewall setup: Access from outside is prohibited. However, if you want to monitor Prometheus server by external monitoring from a Blackbox exporter on another host, allow that access. In that case, consider security measures such as restricting the source IP addresses.

Port: Windows_exporter port, Node_exporter port, Process_exporter port, Ya_cloudwatch_exporter port, Promitor_scraper port, Promitor_resource_discovery port, Blackbox_exporter port, Script_exporter port, Fluentd port
Firewall setup: Access from outside is prohibited.

(18) Setup of integrated agent process alive monitoring (for Windows) (optional)

You monitor integrated agent processes in the following ways:

(a) External monitoring by a Blackbox exporter on another host

The Prometheus server and Alertmanager services are monitored from the Blackbox exporter of an integrated agent running on another host. The following table shows the URLs to be monitored.

For details about how to add an HTTP monitor of Blackbox exporter, see 1.21.2(6)(c) Add, change, or Delete the monitoring target (for Windows) (required). For details about how to set the alert definition, see 1.21.2(3)(b) To Add the alert definition (for Windows) (optional).

Table 1‒24: URLs monitored by HTTP monitoring of Blackbox exporter

Service: Prometheus server
URL to monitor: http://Host-name-of-integrated-agent:Port-number-of-Prometheus-server/-/healthy

Service: Alertmanager
URL to monitor: http://Host-name-of-integrated-agent:Port-number-of-Alertmanager/-/healthy
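
For example, to register the Prometheus server health-check URL as an HTTP monitoring target, an entry along the following lines is added on the monitoring host. This is a minimal sketch assuming the standard Prometheus file_sd layout and the default Prometheus server port 20713 used in the sample below; the actual file name and labels are described in 1.21.2(6)(c):

- targets:
  - http://Host-name-of-integrated-agent:20713/-/healthy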

The following is a sample alert definition for monitoring with the HTTP monitor of Blackbox exporter.

groups:
  - name: service_healthcheck
    rules:
    - alert: jp1_pc_prometheus_healthcheck
      expr: probe_success{instance=~".*:20713/-/healthy"} == 0
      for: 3m
      labels:
        jp1_pc_product_name: "/HITACHI/JP1/JPCCS2"
        jp1_pc_component: "/HITACHI/JP1/JPCCS/CONFINFO"
        jp1_pc_severity: "Error"
        jp1_pc_metricname: "probe_success"
      annotations:
        jp1_pc_firing_description: "Communication to Prometheus server failed. "
        jp1_pc_resolved_description: "Communication to Prometheus server was successful. "
    - alert: jp1_pc_alertmanager_healthcheck
      expr: probe_success{instance=~".*:20714/-/healthy"} == 0
      for: 3m
      labels:
        jp1_pc_product_name: "/HITACHI/JP1/JPCCS2"
        jp1_pc_component: "/HITACHI/JP1/JPCCS/CONFINFO"
        jp1_pc_severity: "Error"
        jp1_pc_metricname: "probe_success"
      annotations:
        jp1_pc_firing_description: "Communication to Alertmanager failed. "
        jp1_pc_resolved_description: "Communication to Alertmanager was successful. "

(b) Process alive monitoring by Windows exporter

The service programs on Windows are monitored by using the process activity information collected by Windows exporter. The processes to be monitored are listed in the following table.

For details about how to set the alert definition, see 1.21.2(3)(b) To Add the alert definition (for Windows) (optional).

Table 1‒25: Processes monitored by Windows exporter

Service: jpc_imagent_service
Process to monitor: Agent-path\bin\jpc_imagent_service.exe
Monitoring target 1: imagent
Monitoring target 2: imagent

Service: jpc_imagentproxy_service
Process to monitor: Agent-path\bin\jpc_imagentproxy_service.exe
Monitoring target 1: imagentproxy
Monitoring target 2: imagentproxy

Service: jpc_imagentaction_service
Process to monitor: Agent-path\bin\jpc_imagentaction_service.exe
Monitoring target 1: imagentaction
Monitoring target 2: imagentaction

Service: jpc_prometheus_server_service
Process to monitor: Agent-path\bin\jpc_prometheus_server_service.exe
Monitoring target 1: prometheus
Monitoring target 2: prometheus_server

Service: jpc_alertmanager_service
Process to monitor: Agent-path\bin\jpc_alertmanager_service.exe
Monitoring target 1: alertmanager
Monitoring target 2: alertmanager

Service: jpc_windows_exporter_service
Process to monitor: Agent-path\bin\jpc_windows_exporter_service.exe
Monitoring target 1: Windows metric collector(Windows exporter)
Monitoring target 2: windows_exporter

Service: jpc_blackbox_exporter_service
Process to monitor: Agent-path\bin\jpc_blackbox_exporter_service.exe
Monitoring target 1: RM Synthetic metric collector(Blackbox exporter)
Monitoring target 2: blackbox_exporter

Service: jpc_ya_cloudwatch_exporter_service
Process to monitor: Agent-path\bin\jpc_ya_cloudwatch_exporter_service.exe
Monitoring target 1: RM AWS metric collector(Yet another cloudwatch exporter)
Monitoring target 2: ya_cloudwatch_exporter

Service: jpc_fluentd_service
Process to monitor: Agent-path\bin\jpc_fluentd_service.exe
  • When using Fluentd:
    Monitoring target 1: fluentd_win Log trapper (Fluentd)
    Monitoring target 2: fluentd
  • When using only the log metrics feature:
    Monitoring target 1: fluentd_prome_win Log trapper (Fluentd)
    Monitoring target 2: fluentd

Service: jpc_script_exporter_service
Process to monitor: Agent-path\bin\jpc_script_exporter_service.exe
  • When using the UAP monitoring function:
    Monitoring target 1: Script metric collector(Script exporter)
    Monitoring target 2: script_exporter
  • When using the job monitoring function:
    Monitoring target 1: JP1/AJS3 metric collector(Script exporter)
    Monitoring target 2: script_exporter

Service: jpc_promitor_scraper_service
Process to monitor: Agent-path\bin\jpc_promitor_scraper_service.exe
Monitoring target 1: RM Promitor
Monitoring target 2: promitor_scraper

Service: jpc_promitor_resource_discovery_service
Process to monitor: Agent-path\bin\jpc_promitor_resource_discovery_service.exe
Monitoring target 1: RM Promitor
Monitoring target 2: promitor_resource_discovery

Here is a sample alert definition for process monitoring by Windows exporter:

groups:
  - name: windows_exporter
    rules:
    - alert: jp1_pc_procmon_monitoring-target-1
      expr: absent(windows_process_start_time{instance="imahost:20717", job="jpc_windows", jp1_pc_exporter="JPC Windows exporter", jp1_pc_nodelabel="jpc_monitoring-target-2_service", process="jpc_monitoring-target-2_service"}) == 1
      for: 3m
      labels:
        jp1_pc_product_name: "/HITACHI/JP1/JPCCS2"
        jp1_pc_component: "/HITACHI/JP1/JPCCS/CONFINFO"
        jp1_pc_severity: "Error"
        jp1_pc_metricname: "windows_process_start_time"
      annotations:
        jp1_pc_firing_description: "The number of processes was less than the threshold value (1)."
        jp1_pc_resolved_description: "The number of processes exceeded the threshold value (1)."
  • In the imahost part, specify the integrated agent host name. In the 20717 part, specify the port number of Windows exporter.

  • For monitoring-target-1 and monitoring-target-2, specify the monitored names listed in Table 1-25 Processes monitored by Windows exporter.

  • If you specify more than one alert definition, repeat the block of lines starting with "- alert:" for each alert.
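
For example, applying this template to the Prometheus server service in Table 1-25 (monitoring target 1 is prometheus, monitoring target 2 is prometheus_server) gives the following rule. The imahost and 20717 parts remain placeholders as described above, and the for, labels, and annotations lines are the same as in the template:

    - alert: jp1_pc_procmon_prometheus
      expr: absent(windows_process_start_time{instance="imahost:20717", job="jpc_windows", jp1_pc_exporter="JPC Windows exporter", jp1_pc_nodelabel="jpc_prometheus_server_service", process="jpc_prometheus_server_service"}) == 1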

(c) Monitoring with Prometheus server up metric

The Windows exporter, Blackbox exporter, and Yet another cloudwatch exporter services are monitored through alert monitoring by Prometheus server. For details about how to set the alert definition, see 1.21.2(3)(b) To Add the alert definition (for Windows) (optional).

Here is a sample alert definition that monitors the up metric:

groups:
  - name: exporter_healthcheck
    rules:
    - alert: jp1_pc_exporter_healthcheck
      expr: up{jp1_pc_remote_monitor_instance=""} == 0 or label_replace(sum by (jp1_pc_remote_monitor_instance,jp1_pc_exporter) (up{jp1_pc_remote_monitor_instance!=""}), "jp1_pc_nodelabel", "${1}", "jp1_pc_remote_monitor_instance", "^[^:]*:([^:]*)$") == 0
      for: 3m
      labels:
        jp1_pc_product_name: "/HITACHI/JP1/JPCCS2"
        jp1_pc_component: "/HITACHI/JP1/JPCCS/CONFINFO"
        jp1_pc_severity: "Error"
        jp1_pc_metricname: "up"
      annotations:
        jp1_pc_firing_description: "Communication to Exporter failed."
        jp1_pc_resolved_description: "Communication to Exporter was successful."

(19) Creation and import of IM management node tree data (for Windows) (required)

Follow the steps below to create and import the IM management node tree data.

  1. If you add a new integrated agent host or change the host name of an integrated agent host, start the JP1/IM agent control base service on that host.

  2. If you add a new add-on program or make setting changes that result in configuration changes, start the add-on program and the JP1/IM agent control base service on the same host.

  3. After the corresponding services in steps 1 and 2 have started on all integrated agent hosts, wait for one minute# after starting the services.

    #: If the value of scrape_interval in the Prometheus configuration file (jpc_prometheus_server.yml) has been changed, wait for the length of time set in that value.

  4. Perform the remaining steps on the integrated manager host.

    For details about the procedure, see steps 2 to 5 in 1.19.3(1)(c) Creation and import of IM management node tree data (for Windows) (required).

(20) Security-product exclusion setup (for Windows) (optional)

If you are deploying antivirus software or other security products, set the following directories as exclusion targets:

(21) Notes on updating the definition file (for Windows)

If you restart a JP1/IM - Agent service to apply updated definition files, monitoring stops while the service is restarting.