
1.21.2 Settings of JP1/IM - Agent

Organization of this subsection

(1) Common setting methods

(a) Edit the configuration files (for Windows)

Configuration files are stored in the conf directory. There are two ways to modify the content of a configuration file:

  • Using the integrated operation viewer

  • Logging in to the host and setting it up directly

For the configuration files that can be edited from the integrated operation viewer, see the notes on the definition files for JP1/IM - Agent (JP1/IM - Agent control base) in List of definition files in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference. If you log in to the host and set it up directly, all configuration files can be edited.

■ How to use the integrated operation viewer

  1. Download the configuration file from the integrated operation viewer.

    Select the file you want to edit in the integrated operation viewer and download it.

    If you want to add a definition file that you can optionally create, do the following:

    1. Download the user-created definition file list definition file.

    2. Add information about the definition file you want to add to the user-created definition file list definition file.

    3. Upload the user-created definition file list definition file.

    4. Upload the definition file that you want to add.

  2. Edit the downloaded file.

    Note

    Because the format of a Prometheus server definition file can be checked with the promtool command, we recommend checking the file at this point.

    The promtool command is included with Prometheus server, which can be downloaded from the GitHub website. Use the same version as the Prometheus server that came with your JP1/IM - Agent.
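
    For example, assuming the downloaded files keep the names used in this manual, a format check might look like this (check config and check rules are standard promtool subcommands):

    promtool check config jpc_prometheus_server.yml
    promtool check rules jpc_alerting_rules.yml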

    The version of a JP1/IM - Agent add-on program can be checked in the add-on function list in the List of Integrated Agents window of the integrated operation viewer, or in the addon_info.txt file stored in "Agent-path\addon_management\add-on-name\".

  3. Upload the edited file with the integrated operation viewer.

    The settings are applied automatically when the file is uploaded.

■ How to log in to the host and set it up

  1. Log in to the integrated agent host.

  2. Stop the JP1/IM - Agent service.

  3. Edit the configuration files.

    Note

    Because the format of a Prometheus server definition file can be checked with the promtool command, we recommend checking the file at this point.

  4. Start the JP1/IM - Agent service.

(b) Changing the service definition file (for Windows)

The storage destination and file name of the service definition file are as follows:

  • Storage destination: Installation-destination-folder\jp1ima\bin\

  • File name: jpc_service-name_service.xml

    Important
    • If you make changes to service definition file items, you will need to restart the service or reinstall the service# to apply the changes. For details about what you need to do to apply each item, see When the definitions are applied in Service definition file (jpc_program-name_service.xml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    • If you change any of the items that require service reinstallation#, you must disable registration of the service and then enable it again. For details about how to enable or disable registration of a service, see 1.21.1(1) Enable or disable add-on program.

    #

    Reinstalling a service means deleting the service and then creating it again (by using the jpc_service command).

To change the service definition file, follow these steps:

  1. Log in to the integrated agent host.

  2. Stop the JP1/IM - Agent service.

  3. Edit the service definition file.

  4. Start the JP1/IM - Agent service.

(c) Change command-line options (for Windows)

Change the command-line options in the <arguments> tag of the service definition file.

For how to edit, see 1.21.2(1)(b) Changing the service definition file (for Windows).
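
As an illustration, the <arguments> tag might look like the following sketch (the surrounding elements are abbreviated and the option values are illustrative; see the service definition file reference for the actual format):

<service>
  ...
  <arguments>--config.file=Agent-path\conf\jpc_prometheus_server.yml --web.listen-address=:20713</arguments>
  ...
</service>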

(2) Setup for JP1/IM - Agent control base

(a) Change the integrated manager to connect to (for Windows) (optional)

  1. Stop the JP1/IM - Agent service.

  2. Change the integrated manager to connect to.

    Change the connection-destination integrated manager defined in the imagent common configuration file (jpc_imagentcommon.json) to the new destination.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  3. Check the initial secret.

    Check the initial secret (the secret used for the first connection) in the integrated operation viewer on the integrated manager host.

    For details, see 2.2.3 Show Initial Secret window in the JP1/Integrated Management 3 - Manager GUI Reference.

  4. Obfuscate and register the initial secret.

    Use the secret management command to obfuscate and register the initial secret:

    jimasecret -add -key immgr.initial_secret -s "initial-secret"
  5. Delete the individual secret.

    Use the secret management command to delete the individual secret:

    jimasecret -rm -key immgr.client_secret
  6. Modify the CA certificate.

    For details on how to change CA certificate, see 1.21.2(2)(c) Place CA certificate (for Windows) (optional).

    This step is not required if the authentication station that issued the server certificate of imbase is the same for both the old and the new connection destination.

  7. Start the JP1/IM - Agent service.

(b) Change the port (for Windows) (optional)

The listen port that JP1/IM - Agent control base uses is specified in the imagent configuration file (jpc_imagent.json) and the imagentproxy configuration file (jpc_imagent_proxy.json).

For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

For details about the default port number, see Appendix C.1(2) Port numbers used by JP1/IM - Agent in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

If you change the port of the imagentproxy process, you must also change the remote write destination of Prometheus, the alert notification destination of Alertmanager, and the alert notification destination of Fluentd. For details about each change, see the corresponding sections below.

(c) Place CA certificate (for Windows) (optional)

This setup is required to encrypt communication between JP1/IM - Agent management base and JP1/IM - Agent control base. If you do not want to encrypt, this setup is not required.

For instructions on deploying a CA certificate, see 9.4.5 Settings for JP1/IM - Agent (JP1/IM agent control base).

■ To verify the server certificate of JP1/IM - Agent management base

  1. Place CA certificate.

    Place the CA certificate of the authentication station that issued the server certificate of the imbase you are connecting to in the following directory:

    • In Windows

      Agent-path\conf\user\cert\

    • In Linux

      /opt/jp1ima/conf/user/cert/

  2. Specify the CA certificate path in the imagent common configuration file (jpc_imagentcommon.json).

  3. Restart imagent and imagentproxy.

■ Not to verify the server certificate of JP1/IM - Agent management base

  1. Set "true" in the tls_config.insecure_skip_verify of imagent shared configuration file (jpc_imagentcommon.json) tls_config.insecure_skip_verify.

(d) Modify settings related to action execution (for Windows) (optional)

Settings for action execution are defined in the imagent configuration file (jpc_imagent.json).

For details about how to set them, see 1.21.2(1)(a) Edit the configuration files (for Windows).

(e) Set up the proxy authentication ID and password (for Windows) (optional)

If there is a proxy server that requires Basic authentication between the agent host and the manager host, an authentication ID and password must be set up.

Set the authentication ID in immgr.proxy_user of the imagent common configuration file (jpc_imagentcommon.json). For details about how to edit each definition file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

Set the password in one of the following ways (an illustrative command example follows the list). For details, see the explanation for each item.

  • Secret management command

    For details, see jimasecret in Chapter 1. Commands in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

  • List of Secrets dialogue of integrated operation viewer

    For details, see 2.2.2(4) List of Secrets dialog in the JP1/Integrated Management 3 - Manager GUI Reference.

  • Integrated operation viewer Secret Management REST API

    For details, see 5.4.3 Initial secret issue in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
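
For illustration only, registering the proxy password with the secret management command might look like the following (the key name immgr.proxy_password is an assumption; check the jimasecret reference for the actual key name):

jimasecret -add -key immgr.proxy_password -s "password"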

(f) Change the user of action execution (mandatory)

Change action.username and action.domainname in the imagent configuration file (jpc_imagent.json). For the setup procedure, see 1.21.2(1)(a) Edit the configuration files (for Windows). In addition, the password of the specified user must be registered by using the jimasecret command.
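
A minimal sketch of the relevant portion of jpc_imagent.json, assuming the dotted key names map to a nested action object (the user and domain names are illustrative):

{
  "action": {
    "username": "jp1admin",
    "domainname": "EXAMPLE"
  }
}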

(3) Setup of Prometheus server

(a) Changing Ports (For Windows) (Optional)

The listen port used by Prometheus server is specified in the --web.listen-address option of the prometheus command.

For details about how to change the options of the prometheus command, see 1.21.2(1)(c) Change command-line options (for Windows). For details of the --web.listen-address option, see prometheus command options in Service definition file (jpc_program-name_service.xml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is "20713". If the port number is changed, review the firewall setup and prohibit access from outside. However, if you want to monitor Prometheus server externally with Blackbox exporter on another host, allow that access. In such cases, consider security measures such as limiting the source IP address as required.

(b) Add an alert definition (for Windows) (optional)

Alert definitions are defined in the alert configuration file (jpc_alerting_rules.yml).

For details on how to edit alert configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

For details about the items that require setup in the alert definition, see Alert rule definition for converting to JP1 events in (a) Alert evaluation function of 3.15.1(3) Performance data monitoring notification function in the JP1/Integrated Management 3 - Manager Overview and System Design Guide. For details of the individual items, see Alert configuration file (jpc_alerting_rules.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

For sample alert definitions, see the settings (default status) of the model files of the following definition files:

  • Node exporter metric definition file (metrics_node_exporter.conf)

  • Windows exporter metric definition file (metrics_windows_exporter.conf)

  • Blackbox exporter metric definition file (metrics_blackbox_exporter.conf)

  • Yet another cloudwatch exporter metric definition file (metrics_ya_cloudwatch_exporter.conf)

    Important

    The following are the points for when you create an alert definition:

    • The performance monitoring function of JP1/IM - Agent allows you to specify a duration (for). If the alert condition is met continuously for the specified period, the alert is judged to be firing.

    • If you want to detect that a metric does not exist, use the absent() function in the alert condition.

      absent(metric{label})

    • If you want the alert to be evaluated only during a certain time period, use PromQL in the alert condition.

      (Example) When monitoring from 8:00 to 12:00 Japan time

      alert-condition and on() (hour() >= 23 or hour() < 3)

      Note that hour() returns the hour in UTC, so you need to convert your local time to UTC.

    • The performance monitoring function of JP1/IM - Agent notifies you when an alert fires and when it is resolved. If you want to be notified in two stages, warning and error, create separate alerts for warning and for error.

    • The message that is displayed when an alert occurs can include the following:

      - Message at firing

      - Message at resolution

    • For details about the variables that can be embedded in alert messages, see 3.15.1(3)(a) Alert evaluation function in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

    Important

    Set alert definitions so that the total number of time-series data sets evaluated by all alert definitions is less than 150.

    For example, when 3 alerts for Windows network interfaces are defined and there are 4 network interfaces on the Windows host, Windows exporter collects 4 time-series data sets.

    Therefore, 4 time-series data sets are evaluated by 3 alerts, and the total number of time-series data sets comes to 4 × 3 = 12.
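
The following is a minimal sketch of an alert definition in standard Prometheus alerting-rule format, which the alert configuration file (jpc_alerting_rules.yml) follows. The alert name, expression, threshold, and labels are illustrative; the JP1-specific items required in an alert definition are described in the references above.

groups:
  - name: example_rules
    rules:
      - alert: HighCpuUsage
        expr: 100 - (avg by (instance) (rate(windows_cpu_time_total{mode="idle"}[5m])) * 100) > 90
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "CPU usage on {{ $labels.instance }} has exceeded 90% for 5 minutes"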

(c) Add Blackbox exporter scrape job (for Windows) (optional)

Before you add a Blackbox exporter scrape job, you must add the module to the Blackbox exporter configuration file. For details, see 1.21.2(6)(b) Add, change, and delete modules (for Windows) (optional).

After you add the module, perform the following steps to set up a scrape job that scrapes with the newly created module:

  1. Create a discovery configuration file for your Blackbox exporter.

    Copy the model file shown below and rename the copy to the copy-destination file name to create a discovery configuration file for Blackbox exporter.

    - When performing HTTP/HTTPS monitoring

    • For Windows:

      Copy source: Agent-path\conf\jpc_file_sd_config_blackbox_http.yml.model

      Copy to: Agent-path\conf\file_sd_config_blackbox_module-name-beginning-with-http.yml

    • For Linux:

      Copy source: /opt/jp1ima/conf/jpc_file_sd_config_blackbox_http.yml.model

      Copy to: /opt/jp1ima/conf/file_sd_config_blackbox_module-name-beginning-with-http.yml

    - When performing ICMP monitoring

    • For Windows:

      Copy source: Agent-path\conf\jpc_file_sd_config_blackbox_icmp.yml.model

      Copy to: Agent-path\conf\file_sd_config_blackbox_module-name-beginning-with-icmp.yml

    • For Linux:

      Copy source: /opt/jp1ima/conf/jpc_file_sd_config_blackbox_icmp.yml.model

      Copy to: /opt/jp1ima/conf/file_sd_config_blackbox_module-name-beginning-with-icmp.yml

    The module name is the name of the module that was added in 1.21.2(6)(b) Add, change, and delete modules (for Windows) (optional).

  2. Edit the discovery configuration file for Blackbox exporter.

    • Discovery configuration file for HTTP/HTTPS monitoring

    For descriptions, see Blackbox exporter (HTTP/HTTPS monitoring) discovery configuration file (jpc_file_sd_config_blackbox_http.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    • Discovery configuration file for ICMP monitoring

    For descriptions, see Blackbox exporter (ICMP monitoring) discovery configuration file (jpc_file_sd_config_blackbox_icmp.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

  3. Use the integrated operation viewer to add the definition file.

    For instructions on how to add a definition file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  4. Add a scrape job to the Prometheus configuration file.

    • When performing HTTP/HTTPS monitoring

    In the Prometheus configuration file (jpc_prometheus_server.yml), copy the definition of the scrape job whose job name is "jpc_blackbox_http" to add a new scrape job.

    • When performing ICMP monitoring

    In the Prometheus configuration file (jpc_prometheus_server.yml), copy the definition of the scrape job whose job name is "jpc_blackbox_icmp" to add a new scrape job.

    <Sample Setup>

    scrape_configs:
      - job_name: Any scrape job name
        metrics_path: /probe
        params:
          module: [module-name]
        file_sd_configs:
          - files:
            - 'Discovery configuration file Name'
        relabel_configs:
          (Omitted)
        metric_relabel_configs:
          (Omitted)
    Any scrape job name

    Specify a name that does not overlap with any other scrape job name, in the range of 1 to 255 characters, excluding control characters.

    Module name

    Specify the module name that was added in 1.21.2(6)(b) Add, change, and delete modules (for Windows) (optional).

    Discovery configuration file name

    Specify the file that you created in step 1.

    For descriptions of Prometheus configuration file, see <scrape_config> in Prometheus configuration file (jpc_prometheus_server.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details about editing Prometheus configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

(d) Add a user-defined exporter scrape job (for Windows) (optional)

To scrape a user-defined exporter, you need the following setup:

  • Adding a user-specific discovery configuration file

  • Editing the Prometheus configuration file (jpc_prometheus_server.yml)

For details about how to add and edit each definition file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  1. Add a user-specific discovery configuration file.

    Specify the user-defined exporter that you want to scrape in the user-specific discovery configuration file (a sample sketch is shown after step 3).

    For descriptions, see User-specific discovery configuration file (user_file_sd_config_any-name.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

  2. Add a scrape job to Prometheus configuration file.

    In Prometheus configuration file (jpc_prometheus_server.yml), add the scrape job to scrape user-defined Exporter.

    Scrape jobs are listed in scrape_configs.

    <Sample Setup>

    scrape_configs:
      - job_name: Scrape job name
     
        file_sd_configs:
          - files:
            - Discovery configuration file Names
     
        relabel_configs:
          - target_label: jp1_pc_nodelabel
            replacement: Label-name of IM management node
     
        metric_relabel_configs:
          - source_labels: ['__name__']
            regex: 'metric-1|metric-2|metric-3'
            action: 'keep'
    Scrape job name

    Specify an arbitrary string. This value is set in the job label of the metrics.

    Discovery configuration file name

    Specify the user-specific discovery configuration file created in step 1.

    Label name of the IM management node

    Specify the character string that the integrated operation viewer displays as the IM management node label. Control characters cannot be used.

    The URL-encoded character string must be within 234 bytes (if all characters are multibyte, the upper limit is 26 characters). A label name that overlaps with the label name of an IM management node created by JP1/IM - Agent cannot be specified. Specify a character string that is not already used on the same host. If an already-used character string is specified, the configuration information SIDs become identical and the IM management nodes are not created properly.

    Metric 1, metric 2, metric 3

    Specify the metrics that you want to collect. If there is more than one metric, separate them with |.

    If you want to collect all metrics, you do not need to include "metric_relabel_configs". However, if a large number of metrics is present, the amount of data becomes large. Therefore, we recommend that you specify "metric_relabel_configs" and limit collection to the metrics to be monitored.

  3. Add a metric definition file.

    Add a metric definition file for the user-defined exporter.

    For descriptions, see User-specific metric definition file (metrics_any-Prometheus-trend-name.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
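
As an example of step 1, a user-specific discovery configuration file in the standard Prometheus file_sd format might look like the following sketch (the host name, port, and label are illustrative; see the definition file reference in step 1 for the exact format):

- targets:
  - agent-host:9100
  labels:
    jp1_pc_exporter: user-exporter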

(e) Changing the remote write destination (for Windows) (optional)

The remote write destination is specified in remote_write.url of the Prometheus configuration file (jpc_prometheus_server.yml) as the URL and port of the imagentproxy process running on the same host. You need to change it only if you change the imagentproxy process port.

<Sample Setup>

remote_write:
- url: http://localhost:20727/ima/api/v1/proxy/service/promscale/write

For instructions on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

(f) Configuring service monitoring settings (For Windows) (Optional)

To use the service monitoring function in an environment that has been upgraded from JP1/IM - Agent 13-00 to 13-01 or later, configure the following settings. These settings are not required if JP1/IM - Agent 13-01 or later was newly installed.

■ Editing Prometheus configuration file (jpc_prometheus_server.yml)

If "windows_service_state" is not set to keep metric in the metric_relabel_configs settings of the jpc_windows scrape job, add the settings. Also, if the same metric_relabel_configs setting does not set a relabel config for "windows_service_.*" metric, add the setting. Add the underlined settings as follows:

(Omitted)
scrape_configs:
(Omitted)
  - job_name: 'jpc_windows'
(Omitted)
    metric_relabel_configs:
      - source_labels: ['__name__']
        regex: 'windows_cs_physical_memory_bytes|windows_cache_copy_read_hits_total|(Omitted)|windows_process_working_set_peak_bytes|windows_process_working_set_bytes|windows_service_state'
        action: 'keep'
      - source_labels: ['__name__']
        regex: 'windows_process_.*'
        target_label: 'jp1_pc_trendname'
        replacement: 'windows_exporter_process'
      - source_labels: ['__name__','process']
        regex: 'windows_process_.*;(.*)'
        target_label: 'jp1_pc_nodelabel'
        replacement: ${1}
      - source_labels: ['__name__']
        regex: 'windows_service_.*'
        target_label: 'jp1_pc_trendname'
        replacement: 'windows_exporter_service'
      - source_labels: ['__name__']
        regex: 'windows_service_.*'
        target_label: 'jp1_pc_category'
        replacement: 'service'
      - source_labels: ['__name__','name']
        regex: 'windows_service_.*;(.*)'
        target_label: 'jp1_pc_nodelabel'
        replacement: ${1}
      - regex: jp1_pc_multiple_node
        action: labeldrop

■ Editing Windows exporter discovery configuration file (jpc_file_sd_config_windows.yml)

If "windows_service_state" is not set in the jp1_pc_multiple_node settings, add the underlined settings as shown below.

- targets:
  - host-name:20717
  labels:
    jp1_pc_exporter: JPC Windows exporter
    jp1_pc_category: platform
    jp1_pc_trendname: windows_exporter
    jp1_pc_multiple_node: "{__name__=~'windows_process_.*|windows_service_.*'}"

(g) Configure the settings when the label name (jp1_pc_nodelabel value) of the IM management node exceeds the upper limit (for Windows) (optional)

The label name (jp1_pc_nodelabel value) of an IM management node can be up to 234 bytes of URL-encoded text (if all characters are multibyte, the limit is 26 characters). If the limit is exceeded, you must change the jp1_pc_nodelabel value in metric_relabel_configs of the Prometheus configuration file (jpc_prometheus_server.yml). If you do not change the value, an IM management node for that value is not created when you create the IM management node tree.

If you want to change the value, add the following settings to the metric_relabel_configs settings of the scrape job in the Prometheus configuration file (jpc_prometheus_server.yml). When changing the value for multiple targets, add one setting for each monitored target.

■ Editing Prometheus configuration file (jpc_prometheus_server.yml)

(Omitted)
scrape_configs:
(Omitted)
  - job_name: 'scrape job name'
(Omitted)
    metric_relabel_configs:
(Omitted)
      - source_labels: ['jp1_pc_nodelabel']
        regex: 'Regular expression to match the value before the jp1_pc_nodelabel change'
        target_label: 'jp1_pc_nodelabel'
        replacement: 'Value after jp1_pc_nodelabel change'

(h) Setting for executing the SAP system log extract command using Script exporter (for Windows) (Optional)

Perform the following steps:

  1. Edit the scrape definition in the Prometheus configuration file.

    To execute the SAP system log extract command by using the http_sd_config method of Script exporter, change the scrape definition of Script exporter as shown below.

    • Editing Prometheus configuration file (jpc_prometheus_server.yml)

    (Omitted)
    scrape_configs:
    (Omitted)
     
      - job_name:  'jpc_script'
     
        http_sd_configs:
          - url: 'http://installation-host-name:port/discovery'
    (Omitted)
     
        metric_relabel_configs:
          - source_labels: ['__name__']
            regex: 'script_success|script_duration_seconds|script_exit_code'
            action: 'keep'
          - source_labels: [jp1_pc_script]
            target_label: jp1_pc_nodelabel
          - source_labels: [jp1_pc_script]
            regex: '.*jr3slget.*|.*jr3alget.*'
            target_label: 'jp1_pc_category'
            replacement: 'enterprise'
          - regex: (jp1_pc_script|jp1_pc_multiple_node|jp1_pc_agent_create_flag)
            action: labeldrop

(4) Setup of Alertmanager

(a) Changing Ports (For Windows) (Optional)

The listen port used by Alertmanager is specified in the --web.listen-address option of the alertmanager command.

For details about how to change the options of the alertmanager command, see 1.21.2(1)(c) Change command-line options (for Windows). For details of the --web.listen-address option, see alertmanager command options in Service definition file (jpc_program-name_service.xml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is "20714". If the port number is changed, review the firewall setup and prohibit access from outside. However, if you want to monitor Alertmanager externally with Blackbox exporter on another host, allow that access. In such cases, consider security measures such as limiting the source IP address as required.

(b) Changing the alert notification destination (for Windows) (optional)

The alert notification destination is specified in receivers.webhook_config.url of the Alertmanager configuration file (jpc_alertmanager.yml) as the URL and port of the imagentproxy process running on the same host. You need to change it only if you change the imagentproxy process port.

For instructions on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

(c) Set up a silence (for Windows) (optional)

From JP1/IM - Manager, execute a command against the host where the Alertmanager for which you want to set up a silence is running. Use the curl command to call the REST API that sets up silences.

For the REST API for setting up silences, see 5.21.4 Silence creation of Alertmanager in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The silence settings specified in the message body of the request are passed as curl command arguments.
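
As an illustration using the standard Alertmanager v2 silences API (the host name, matcher, times, and comment are illustrative; see the referenced section for the exact API supported by JP1/IM):

curl -X POST -H "Content-Type: application/json" http://Alertmanager-host:20714/api/v2/silences -d '{
  "matchers": [{"name": "alertname", "value": "alert-name", "isRegex": false}],
  "startsAt": "2025-01-01T00:00:00Z",
  "endsAt": "2025-01-01T06:00:00Z",
  "createdBy": "jp1admin",
  "comment": "Planned maintenance"
}'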

(5) Setup of Windows exporter

(a) Change the port (optional)

The listen port used by Windows exporter is specified in the --telemetry.addr option of the windows_exporter command.

For details about how to change the options of the windows_exporter command, see 1.21.2(1)(c) Change command-line options (for Windows). For details of the --telemetry.addr option, see windows_exporter command options in Service definition file (jpc_program-name_service.xml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is "20717". If the port number is changed, review the firewall setup and prohibit access from outside.

(b) Modify the metrics to collect (optional)

  1. Add metrics to the Prometheus configuration file.

    In metric_relabel_configs of the Prometheus configuration file (jpc_prometheus_server.yml), the metrics to be collected are defined, separated by "|". Delete the metrics that you do not need to collect, and add the metrics that you want to collect.

    For instructions on updating configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

    <Sample Setup>

      - job_name: 'jpc_windows'
        :
        metric_relabel_configs:
          - source_labels: ['__name__']
            regex: 'windows_cache_copy_read_hits_total|windows_cache_copy_reads_total|windows_cpu_time_total|windows_logical_disk_free_bytes|windows_logical_disk_idle_seconds_total|windows_logical_disk_read_bytes_total|....|windows_net_packets_sent_total|windows_net_packets_received_total|windows_system_context_switches_total|windows_system_processor_queue_length|windows_system_system_calls_total[Add metrics here, separated by |]'
  2. If required, define a trend view in the metric definition file.

    Define trend views in the Windows exporter metric definition file and the Windows exporter (process monitoring) metric definition file.

    For descriptions, see Windows exporter metric definition file (metrics_windows_exporter.conf) and Windows exporter (process monitoring) metric definition file (metrics_windows_exporter_process.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

  3. Configure the service monitoring settings.

    - Editing the Windows exporter configuration file (jpc_windows_exporter.yml)

    When performing service monitoring, edit the Windows exporter configuration file (jpc_windows_exporter.yml) as shown below.

    collectors:
      enabled: cache,cpu,logical_disk,memory,net,system,cs,process,service
    collector:
      logical_disk:
        volume-whitelist: ".+"
        volume-blacklist: ""
      net:
        nic-whitelist: ".+"
        nic-blacklist: ""
      process:
        whitelist: ""
        blacklist: ""
      service:
        services-where: "WQL's Where phrase"
    scrape:
      timeout-margin: 0.5

    If "service" is not set for "enabled" of "collectors", add the "service" setting.

    If "service" of "collector" is not set, add "service" and "services-where" lines. The value of "services-where" is Where phrase of WQL and sets the service name of the service to be monitored in the format "Name='service-name'". If the service name is set to exact match and you want to monitor more than one service, connect them with a OR and set them in the format "Name='service-name' OR Name='service-name' OR ...".

    - Sample definitions of Windows exporter configuration file (jpc_windows_exporter.yml)

    The following is a sample definition for monitoring the jpc_imagent and jpc_imagentproxy services:

    collectors:
      enabled: cache,cpu,logical_disk,memory,net,system,cs,process,service
    collector:
      logical_disk:
        volume-whitelist: ".+"
        volume-blacklist: ""
      net:
        nic-whitelist: ".+"
        nic-blacklist: ""
      process:
        whitelist: ""
        blacklist: ""
      service:
        services-where: "Name='jpc_imagent' OR Name='jpc_imagentproxy'"
    scrape:
      timeout-margin: 0.5

(c) Specifying monitored processes (required)

- Edit the Windows exporter configuration file (jpc_windows_exporter.yml)

Edit the Windows exporter configuration file (jpc_windows_exporter.yml) to define which processes are to be monitored.

By default, no process is monitored; therefore, specify the processes that you want to monitor in the Windows exporter configuration file.

For details on the Windows exporter configuration file, see Windows exporter configuration file (jpc_windows_exporter.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
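
As an illustration based on the sample structure shown in (b) above, monitoring only processes whose names begin with jpc_ might look like the following (the regular expression is illustrative; see the referenced definition file description for the exact syntax):

collector:
  process:
    whitelist: "jpc_.+"
    blacklist: ""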

(6) Setup of Blackbox exporter

(a) Changing Ports (For Windows) (Optional)

The listen port used by Blackbox exporter is specified in the --web.listen-address option of the blackbox_exporter command.

For details about how to change the options of the blackbox_exporter command, see 1.21.2(1)(c) Change command-line options (for Windows). For details of the --web.listen-address option, see blackbox_exporter command options in Service definition file (jpc_program-name_service.xml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is "20715". If the port number is changed, review the firewall setup and prohibit access from outside.

(b) Add, change, and delete modules (for Windows) (optional)

For each target (host or URL), you must define in Blackbox exporter the monitoring method, such as the protocol and authentication data.

The following modules are defined in the default setup:

Table 1‒12: Modules defined in the initial setup

• Module name: http

  Feature:

  • Monitors HTTP/HTTPS.

  • The method is GET, and no headers are set.

  • Client authentication, server authentication, and HTTP authentication (Basic authentication) are not performed.

  • When the HTTP/HTTPS URL is accessed and a status code in the range 200-299 is returned, 1 is set in the probe_success metric.

  • If communication with the URL is not possible, or if the status code is not in the range 200-299, 0 is set in the metric.

  • If the target is redirected, the result depends on the status code of the redirect destination.

• Module name: icmp

  Feature:

  • Monitors ICMP.

  • Authentication is not performed.

  • If ICMP communication is possible with the monitored host or IP address, 1 is set in the metric. If communication is not possible, 0 is set.

If monitoring is possible with the modules in the default setup, there is no need to define a new one. If there are requirements that cannot be monitored with the modules in the initial setup, as shown below, a module definition must be added.

  • When authentication is required

  • To change the judgment based on the content of the response

Modules are defined in Blackbox exporter configuration file. For descriptions, see Blackbox exporter configuration file (jpc_blackbox_exporter.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The following shows description rules:

  • Name the new module as follows:

    • When performing HTTP/HTTPS monitoring

      Use a name starting with http.

    • When performing ICMP monitoring

      Use a name starting with icmp.

  • If you are creating a module that uses client authentication, server authentication, or HTTP authentication (Basic authentication), you will need a certificate and a password setup.

    For the location of the certificate, see the list of files and directories in Appendix A.4 JP1/IM - Agent in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

    For details about how to set the HTTP authentication (Basic authentication) password, see 1.21.2(6)(e) Setup the proxy authentication ID and Password (for Windows) (Optional) and 1.21.2(6)(f) Setup authentication ID, Password, and Bearer tokens for accessing the monitored Web Server (for Windows) (optional).

    Table 1‒13: Monitoring requirements and required setup

    • Server authentication

      Required file: Place the CA certificate of the authentication station that issued the server certificate of the target in Agent-path\conf\user\cert.

      Required setup: Set the following in tls_config of the Blackbox exporter configuration file:

      - Set ca_file to the CA certificate path

      - Set insecure_skip_verify to false

    • No server authentication

      Required file: None.

      Required setup: Set the following in tls_config of the Blackbox exporter configuration file:

      - Set insecure_skip_verify to true

    • Client authentication

      Required files:

      - Place the client certificate in Agent-path\conf\cert.

      - Place the client certificate key file in Agent-path\conf\user\secret.

      Required setup: Set the following in tls_config of the Blackbox exporter configuration file:

      - Set cert_file to the client certificate path

      - Set key_file to the client certificate key file path

    • No client authentication

      Required file: None.

      Required setup: None.

    • Basic authentication

      Required file: None.

      Required setup: Set the following in basic_auth of the Blackbox exporter configuration file:

      - Set username to the user name used for Basic authentication

      For details about how to set the Basic authentication password, see 1.21.2(6)(f) Setup authentication ID, Password, and Bearer tokens for accessing the monitored Web Server (for Windows) (optional).

For instructions on updating the Blackbox exporter configuration file and deploying the certificate files, see 1.21.2(1)(a) Edit the configuration files (for Windows).

If Blackbox exporter needs to access the monitored Web server through a proxy server, the proxy server setup is required. See 1.21.2(6)(d) Monitoring HTTP through proxy (for Windows) (optional).

If you add a module definition, you will need to define a scrape job on Prometheus server that scrapes with the newly created module. For details about the setup on Prometheus server, see 1.21.2(3)(c) Add Blackbox exporter scrape job (for Windows) (optional).
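
The following is a minimal sketch of an added module definition in jpc_blackbox_exporter.yml, using standard Blackbox exporter configuration syntax (the module name and setting values are illustrative; the name follows the naming rule above):

modules:
  http_basic_auth_monitor:
    prober: http
    timeout: 10s
    http:
      method: GET
      basic_auth:
        username: "monitor-user"
      tls_config:
        insecure_skip_verify: true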

(c) Add, change, or delete the monitoring target (for Windows) (mandatory)

The monitoring targets of Blackbox exporter are listed in the definition files described below.

After you add targets, you must refresh the IM management node tree. For details, see 1.21.2(16) Creation and import of IM management node tree data (for Windows) (mandatory).

  • Blackbox exporter (HTTP/HTTPS monitoring) discovery configuration file

    File name:

    - jpc_file_sd_config_blackbox_http.yml

    - file_sd_config_blackbox_module-name-beginning-with-http.yml

    Setup target: Define the monitoring targets of HTTP/HTTPS.

    Format: See Blackbox exporter (HTTP/HTTPS monitoring) discovery configuration file (jpc_file_sd_config_blackbox_http.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    Update procedure: See 1.21.2(1)(a) Edit the configuration files (for Windows).

  • Blackbox exporter (ICMP monitoring) discovery configuration file

    File name:

    - jpc_file_sd_config_blackbox_icmp.yml

    - file_sd_config_blackbox_module-name-beginning-with-icmp.yml

    Setup target: Define the monitoring targets of ICMP.

    Format: See Blackbox exporter (ICMP monitoring) discovery configuration file (jpc_file_sd_config_blackbox_icmp.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    Update procedure: See 1.21.2(1)(a) Edit the configuration files (for Windows).
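
As an illustration, an entry in an HTTP/HTTPS monitoring discovery configuration file in the standard Prometheus file_sd format might look like the following (the target URL and label are illustrative; the exact required content is described in the format references above):

- targets:
  - https://www.example.com/
  labels:
    jp1_pc_nodelabel: example-web-server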

(d) Monitoring HTTP through proxy (for Windows) (optional)

Setup "proxy_url" to Blackbox exporter configuration file (jpc_blackbox_exporter.yml).

For details about the Blackbox exporter configuration file, see Blackbox exporter configuration file (jpc_blackbox_exporter.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

For details on updating Blackbox exporter configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

Note that an authentication ID and password must be set up when the proxy requires authentication. For how to set the authentication ID and password, see 1.21.2(6)(e) Setup the proxy authentication ID and Password (for Windows) (Optional).

To skip the DNS resolution and URL rewriting performed by Blackbox exporter, set skip_resolve_phase_with_proxy to true in the Blackbox exporter configuration file (jpc_blackbox_exporter.yml). For details, including an example of when skip_resolve_phase_with_proxy is necessary, see Blackbox exporter configuration file (jpc_blackbox_exporter.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
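
A minimal sketch of the relevant portion of jpc_blackbox_exporter.yml with a proxy setting, using standard Blackbox exporter syntax (the module name and proxy address are illustrative):

modules:
  http_via_proxy:
    prober: http
    http:
      proxy_url: "http://proxy-host:8080"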

(e) Setup the proxy authentication ID and Password (for Windows) (Optional)

When performing HTTP/HTTPS monitoring, if there is a proxy server that requires a Basic authentication between Blackbox exporter and the monitored Web Server, authentication ID and Password must be setup.

Authentication ID is specified in "modules. module-name. http.proxy_user" of Blackbox exporter configuration file (jpc_blackbox_exporter.yml). For details about Setting method of Blackbox exporter configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

Set Password in the following ways: For details, refer to the explanation for each item.

  • Secret management command

  • List of Secrets dialogue of integrated operation viewer

  • REST API of Secret Management of Integrated operation viewer

(f) Setup authentication ID, Password, and Bearer tokens for accessing the monitored Web Server (for Windows) (optional)

When you perform HTTP/HTTPS monitoring, you must set up the authentication ID, password, and Bearer token if Basic authentication is required for accessing the monitored Web server.

The authentication ID is specified in modules.module-name.http.basic_auth.username of the Blackbox exporter configuration file (jpc_blackbox_exporter.yml). For details about how to edit the Blackbox exporter configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

Set the password and Bearer token in one of the following ways. For details, see the explanation for each item.

  • Secret management command

  • List of Secrets dialogue of integrated operation viewer

  • REST API of Secret Management of Integrated operation viewer

(7) Setup of Yet another cloudwatch exporter

(a) Changing Ports (For Windows) (Optional)

The listen port used by Yet another cloudwatch exporter is specified in the -listen-address option of the yet-another-cloudwatch-exporter command.

For details about how to change the options of the yet-another-cloudwatch-exporter command, see 1.21.2(1)(c) Change command-line options (for Windows). For details of the -listen-address option, see yet-another-cloudwatch-exporter command options in Unit definition file (jpc_program-name.service) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is "20718". If the port number is changed, review the firewall setup and prohibit access from outside.

(b) Modify Setup to connect to CloudWatch (for Windows) (optional)

There are two ways to connect to CloudWatch from Yet another cloudwatch exporter: using an access key (hereinafter referred to as the access key method) and using an IAM role (hereinafter referred to as the IAM role method). If you install Yet another cloudwatch exporter on a host other than AWS/EC2, you can use only the access key method. If you install Yet another cloudwatch exporter on AWS/EC2, you can use either the access key method or the IAM role method.

The procedure for connecting to CloudWatch is described in the following four patterns.

  • Access Key Method (Part 1)

    Connect to CloudWatch as an IAM user in your AWS account

  • Access Key Method (Part 2)

    Create multiple IAM users in your AWS account with the same role, and connect to CloudWatch with IAM users in this role

  • IAM Role Method (Part 1)

    Connect to CloudWatch with an AWS account for which you have configured an IAM role

  • IAM Role Method (Part 2)

    Connect to CloudWatch with multiple AWS accounts with the same IAM role

- When connecting to CloudWatch with the access key method (part 1)

  1. Create an IAM policy "yace_policy" in AWS account (1) and set the following JSON format information:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "CloudWatchExporterPolicy",
          "Effect": "Allow",
          "Action": [
            "tag:GetResources",
            "cloudwatch:ListTagsForResource",
            "cloudwatch:GetMetricData",
            "cloudwatch:ListMetrics"
          ],
          "Resource": "*"
        }
      ]
    }

  2. Create an IAM group "yace_group" in the AWS account (1) and assign the IAM policy "yace_policy" created in step 1.

  3. Create IAM user "yace_user" in AWS account (1) and belong to the IAM group "yace_group" created in step 2.

  4. On the host of the monitoring module, create a credentials file in the "/root/.aws/" directory, and set the access key and secret access key of the IAM user "yace_user" created in step 3 in the [default] section of the credentials file.

- When connecting to CloudWatch with the access key method (part 2)

  1. Create an IAM policy "yace_policy" in AWS account (2) and set the same JSON format information as in step 1 of the access key method (part 1).

  2. Create the IAM role "cross_access_role" in AWS account (2), select "Another AWS account" for [Select trusted entity type], and specify the account ID of AWS account (1) as the account ID.

  3. Assign the IAM policy "yace_policy" created in step 1 to the IAM role "cross_access_role" created in step 2.

  4. Create IAM policy "yace_policy" in AWS account (1) and set the same JSON format information as in step 1 of Access method (Part 1).

  5. Create IAM policy "account2_yace_policy" in AWS account (1) and set the following JSON format information.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "sts:AssumeRole",
          "Resource": "arn:aws:iam::AWS-account-(2)-ID:role/cross_access_role"
        }
      ]
    }

    The underlined "cross_access_role" is the name of IAM role created in step 2.

  6. Create an IAM group "yace_group" in your AWS account (1), and assign the IAM policy "yace_policy" created in step 1 and the IAM policy "account2_yace_policy" created in step 5.

  7. Create IAM user "yace_user" in AWS account (1) and belong to the IAM group "yace_group" created in step 6.

  8. On the host of the monitoring module, create a credentials file in the "/root/.aws/" directory, and set the access key and secret access key of the IAM user "yace_user" created in step 7 in the [default] section of the credentials file.

  9. Add the following definition# of AWS account (2) to the Yet another cloudwatch exporter configuration file (ya_cloudwatch_exporter.yml).

    discovery:
      exportedTagsOnMetrics:
        AWS/S3:
          - jp1_pc_nodelabel
      jobs:
      - type: AWS/S3
        regions:
          - us-east-2
        metrics:
          - name: BucketSizeBytes
            statistics:
              - Sum
            period: 300000
            length: 400000
            nilToZero: true
      - type: AWS/S3
        regions:
          - us-east-2
        roles:
          - roleArn: "arn:aws:iam::AWS-account-(2)-ID:role/cross_access_role"
        metrics:
          - name: BucketSizeBytes
            statistics:
              - Sum
            period: 300000
            length: 400000
            nilToZero: true

    #

    The first job definition (the one without roles) shows the collection settings of AWS account (1), and the second job definition (the one with roles.roleArn) shows the collection settings of AWS account (2).

    In the collection settings of AWS account (2), "roles.roleArn" must be specified. You can specify up to two AWS accounts in "roles.roleArn"; if you want to specify more than two accounts, contact Hitachi Sales.

- When connecting to CloudWatch using the IAM role method (part 1)

  1. Create an IAM policy "yace_policy" in AWS account (1) and set the same JSON format information as in step 1 of the access key method (part 1).

  2. Create an IAM role "yace_role" in your AWS account (1), and select AWS service for [Select trusted entity type] and EC2 for [Select use case].

  3. Assign the IAM policy "yace_policy" created in step 1 to the IAM role "yace_role" created in step 2.

  4. Assign the IAM role "yace_role" created in steps 2~3 to the EC2 instance where the monitoring module of AWS account (1) is installed#.

    #

    Open the EC screen of the AWS console and execute it in the menu of [Action] - [Security] - [Change IAM Role].

- When connecting to CloudWatch using the IAM role method (part 2)

  1. Create IAM policy "yace_policy" in AWS account (2) and set the same JSON format information as in step 1 of Access method (Part 1).

  2. Create the IAM role "cross_access_role" in AWS account (2), select "Another AWS account" for [Select trusted entity type], and specify the account ID of AWS account (1) as the account ID. Also, specify an external ID if necessary.

  3. Create IAM policy "account2_yace_policy" in AWS account (1) and set the following JSON format information.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "sts:AssumeRole",
          "Resource": "arn:aws:iam::AWS-account-(2)-ID:role/cross_access_role"
        }
      ]
    }

    The underlined "cross_access_role" is the name of IAM role created in step 2.

  4. Create an IAM role "yace_role" in your AWS account (1), and select AWS service for [Select trusted entity type] and EC2 for [Select use case].

  5. Assign the IAM policy "account2_yace_policy" created in step 3 to the IAM role "yace_role" created in step 4.

  6. Assign the IAM role "yace_role" created in step 4 to the EC2 instance where the monitoring module of the AWS account (1) is installed.#

    #

    Open the EC screen of the AWS console and execute it in the menu of [Action] - [Security] - [Change IAM Role].

  7. Add the following definition# of AWS account (2) to the Yet another cloudwatch exporter configuration file (ya_cloudwatch_exporter.yml).

    discovery:
      exportedTagsOnMetrics:
        AWS/S3:
          - jp1_pc_nodelabel
      jobs:
      - type: AWS/S3
        regions:
          - us-east-2
        roles:
          - roleArn: "arn:aws:iam::AWS-account-(2)-ID:role/cross_access_role"
            externalId: "external-ID"
        metrics:
          - name: BucketSizeBytes
            statistics:
              - Sum
            period: 300000
            length: 400000
            nilToZero: true

    #

    The roles.roleArn and externalId settings show the collection settings for AWS account (2).

    In the collection settings of AWS account (2), "roles.roleArn" must be specified. You can specify up to two AWS accounts in "roles.roleArn"; if you want to specify more than two accounts, contact Hitachi Sales.

    Specify "externalId" in the collection settings of your AWS account (2) only if you specified an external ID in step 2.

(c) Connect to CloudWatch through a proxy (for Windows) (optional)

If you need to connect to CloudWatch through a proxy, use the HTTPS_PROXY environment variable (the HTTP_PROXY environment variable is not available).

The format of the value specified in the HTTPS_PROXY environment variable is shown below.

http://proxy-user-name:password@proxy-server-host-name:port-number
Important

Note that the value begins with "http://" even though the name of the environment variable is HTTPS_PROXY.

■ For Windows

  1. Stop Yet another cloudwatch exporter.

  2. Open the System Properties dialog box from [Settings] - [System] - [About] - [Related settings] - [Advanced system settings].

  3. Click [Environment Variables] to display the Environment Variables dialog box.

  4. Set the following system environment variable:

    Variable name: HTTPS_PROXY

    Value: http://proxy-user-name:password@proxy-server-host-name:port-number

  5. Start Yet another cloudwatch exporter.

    Important
    • Because the environment variable HTTPS_PROXY is set as a system environment variable, it is reflected in all processes running on that host.

    • Note that system environment variables can be displayed by anyone who can log in to the host. When a password is specified in the HTTPS_PROXY environment variable, measures such as limiting the users who can log in to the host are required.

■ For Linux

  1. Stop Yet another cloudwatch exporter.

  2. Create a file and write the following:

    HTTPS_PROXY=http://proxy-user-name:Password@proxy-server-host-name:port-number

    For details about what to write, execute man systemd.exec and check the description of "EnvironmentFile=".

  3. Add an EnvironmentFile entry to the unit definition file and specify the path of the file created in step 2.

      :
    [Service]
    EnvironmentFile = "path-of-file-created-in-step-2"
    WorkingDirectory = ....
    ExecStart = ....
      :
  4. Refresh systemd.

    Execute the following command:

    systemctl daemon-reload
  5. Start Yet another cloudwatch exporter.

(d) Add AWS Services to be Monitored (Optional)

The following six AWS services are monitored by default. If you want to monitor other AWS services, follow the steps below.

  • AWS/EC2

  • AWS/Lambda

  • AWS/S3

  • AWS/DynamoDB

  • AWS/States

  • AWS/SQS

  1. Add the AWS service definition in the Yet another cloudwatch exporter configuration file.

    For details about editing, see 1.21.2(1)(a) Edit the configuration files (for Windows).

    Add the AWS service definition in the locations shown below.

    • discovery.exportedTagsOnMetrics

    - Description

    discovery:
      exportedTagsOnMetrics:
        AWS-service-name:
          - jp1_pc_nodelabel

    - Sample Setup

    discovery:
      exportedTagsOnMetrics:
        AWS/EC2:
          - jp1_pc_nodelabel
    • discovery.jobs

    - Description

    discovery:
      : 
      jobs:
      - type: AWS-service-name
        regions:
          - AWS-region
        period: 0
        length: 600
        delay: 120
        metrics:

    - Sample Setup

    discovery:
      : 
      jobs:
      - type: AWS/EC2
        regions:
          - ap-northeast-1
        period: 0
        length: 600
        delay: 120
        metrics:
  2. Add the metrics that you want to collect.

    See 1.21.2(7)(f) Modify the metrics to collect (optional).

(e) Monitoring AWS resources (optional)

For an AWS resource to be monitored by Yet another cloudwatch exporter, the jp1_pc_nodelabel tag must be set on the AWS resource that you want to monitor. See the AWS documentation for how to set tags on AWS resources.

Set the following value in the jp1_pc_nodelabel tag. Specify alphanumeric characters and hyphens within the range of 1 to 255 characters.

  • For EC2

    Specify the host name.

  • Other than EC2

    Specify the text that is displayed as the IM management node label.

    Important
    • Set a string that is unique within each AWS service. You can set the same string for different services (for example, EC2 and Lambda).

    • Use different strings for accounts that are different YACE monitoring destinations. Even in different regions, use different strings for the same service.

    • If a string is duplicated, only one IM management node is created.

The value set in the jp1_pc_nodelabel tag is added as the value of the jp1_pc_nodelabel label of the samples collected by Yet another cloudwatch exporter.
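
For example, for an EC2 instance, the tag could be set with the AWS CLI as follows (the instance ID and tag value are illustrative):

aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=jp1_pc_nodelabel,Value=web-host-01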

(f) Modify the metrics to collect (optional)

  1. Verify the metrics collected in CloudWatch.

    Verify that the metrics that you want to collect are being collected in CloudWatch.

    In addition, check the CloudWatch metric names and CloudWatch statistic types in preparation for the setup in the following steps.

    For details about CloudWatch metric names and CloudWatch statistic types, see the "Amazon CloudWatch User Guide" in the AWS documentation.

  2. Add the definitions of CloudWatch metrics to the Yet another cloudwatch exporter configuration file.

    The discovery.jobs.metrics section below (annotated #1 to #6) describes the CloudWatch metric definitions.

    discovery:
       : 
      jobs:
      - type: AWS Service name
        regions:
          - AWS region
        period: 0
        length: 600
        delay: 120
        metrics:
          - name: CloudWatch-metric-name-1#1
            statistics:
            - CloudWatch-statistic-types#2
          - name: CloudWatch-metric-name-2#3
            statistics:
            - CloudWatch-statistic-types#4
          - name: CloudWatch-metric-name-3#5
            statistics:
            - CloudWatch-statistic-types#6
            :

    #1 Example of CloudWatch-metric-name-1: CPUUtilization

    #2 Example of CloudWatch-statistic-types: Average

    #3 Example of CloudWatch-metric-name-2: DiskReadBytes

    #4 Example of CloudWatch-statistic-types: Sum

    #5 Example of CloudWatch-metric-name-3: DiskWriteBytes

    #6 Example of CloudWatch-statistic-types: Sum

  3. Add metrics to the Prometheus configuration file.

    The value of metric_relabel_configs lists the metrics to collect, separated by |. Add the metrics that you want to collect, and delete the metrics that do not need to be collected. For the naming conventions for metric names, see Naming conventions for Exporter metrics in 3.15.1(1)(g) Yet another cloudwatch exporter (CloudWatch performance data collection capability) in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

    For details on editing Prometheus configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

    <Sample Setup>

      - job_name: 'jpc_cloudwatch'
        : 
        metric_relabel_configs:
          - regex: 'tag_(jp1_pc_.*)'
            replacement: ${1}
            action: labelmap
          - regex: 'tag_(jp1_pc_.*)'
            action: 'labeldrop'
          - source_labels: ['__name__','jp1_pc_nodelabel']
            regex: '(aws_ec2_cpuutilization_average|aws_ec2_disk_read_bytes_sum|aws_ec2_disk_write_bytes_sum|aws_lambda_errors_sum|aws_lambda_duration_average|aws_s3_bucket_size_bytes_sum|aws_s3_5xx_errors_sum|aws_dynamodb_consumed_read_capacity_units_sum|aws_dynamodb_consumed_write_capacity_units_sum|aws_states_execution_time_average|aws_states_executions_failed_sum|aws_sqs_approximate_number_of_messages_delayed_sum|aws_sqs_number_of_messages_deleted_sum[Add metrics here, separated by "|"]);.+$'
            action: 'keep'
  4. If required, define a trend view in the metric definition file.

    For descriptions, see Yet another cloudwatch exporter metric definition file (metrics_ya_cloudwatch_exporter.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(8) Set up of Promitor

If you use Promitor for monitoring, configure the following settings.

(a) Configuring the settings for establishing a connection to Azure (required)

- Modify the service definition file (for Windows) or the unit definition file (for Linux)

You specify the storage location of the Promitor configuration file with an absolute path in the PROMITOR_CONFIG_FOLDER environment variable. Modify this environment variable, which is found in the service definition file (for Windows) or the unit definition file (for Linux). For details on the service definition file (for Windows) and the unit definition file (for Linux), see the sections describing the applicable files under Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
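For example, on Linux, the environment variable is set with an Environment entry in the [Service] section of the unit definition file. The following is a minimal sketch; the installation path is an assumption.

  [Service]
  Environment="PROMITOR_CONFIG_FOLDER=/opt/jp1ima/conf/promitor/scraper/"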

- Modify the Promitor Scraper runtime configuration file (runtime.yaml)

In the Promitor Scraper runtime configuration file (runtime.yaml), specify the path to the Promitor Scraper configuration file (metrics-declaration.yaml) in metricsConfiguration.absolutePath. For details on the Promitor Scraper runtime configuration file (runtime.yaml), see the section describing the applicable file under Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
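The following is a minimal sketch of the relevant part of runtime.yaml; the path is an assumption.

  metricsConfiguration:
    absolutePath: /opt/jp1ima/conf/promitor/scraper/metrics-declaration.yaml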

- Configure information for connecting to Azure

Configure authentication information used for Promitor to connect to Azure. For details on how to configure it, see 1.21.2(8)(b) Configuring authentication information for connecting to Azure.

(b) Configuring authentication information for connecting to Azure

Promitor can connect to Azure through the service principal method or the managed ID method. Only the service principal method is available when Promitor is installed in hosts other than Azure Virtual Machines. Both the service principal method and the managed ID method are available when Promitor is installed in Azure Virtual Machines.

The following describes three procedures for connecting to Azure.

  • Service principal method

    This uses a client secret to connect to Azure.

  • Managed ID method (system-assigned)

    This uses a system-assigned managed ID to connect to Azure.

  • Managed ID method (user-assigned)

    This uses a user-assigned managed ID to connect to Azure.

- Using the service principal method to connect to Azure

Perform steps 1 to 3 in Azure Portal and then perform steps 4 to 6 on a host where Promitor has been installed.

  1. Create an application and issue a client secret.

  2. Obtain an application (client) ID in Overview for the application.

  3. Select a resource group (or subscription) to be monitored, and then select Access control (IAM) and Add role assignment.

  4. Register, in JP1/IM - Agent, the client secret value (the string shown under the Value column) that was issued in step 1.

    Specify the following values for the keys used to register the secrets:

    • Promitor Resource Discovery key: Promitor.resource_discovery.env.AUTH_APPKEY

    • Promitor Scraper key: Promitor.scraper.env.AUTH_APPKEY

    For details on how to register the secrets, see the description of adding, modifying, and removing secrets under 3.15.10(2) Secret obfuscation of JP1/IM - Agent in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

    Important

    When building in a container environment, you cannot perform this step before you create a container image. Create a container, and then perform this step.

  5. In the Promitor Scraper runtime configuration file (runtime.yaml) and the Promitor Resource Discovery runtime configuration file (runtime.yaml), specify ServicePrincipal for authentication.mode.

  6. In the Promitor Scraper configuration file (metrics-declaration.yaml) and the Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml), specify the information on the Azure instance to connect to.

    • Promitor Scraper configuration file (metrics-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureMetadata.

    • Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureLandScape.
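As an illustration of steps 5 and 6, the following is a minimal sketch of the relevant parts of the two files. The IDs and names are placeholders, and the key names follow the Promitor documentation.

  # Promitor Scraper runtime configuration file (runtime.yaml)
  authentication:
    mode: ServicePrincipal

  # Promitor Scraper configuration file (metrics-declaration.yaml)
  version: v1
  azureMetadata:
    tenantId: tenant-ID
    subscriptionId: subscription-ID
    resourceGroupName: resource-group-name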

- Using the managed ID method (system-assigned) to connect to Azure

Perform steps 1 to 3 in Azure Portal and then perform steps 4 to 6 on a host where Promitor has been installed.

  1. In Virtual Machines, select the Azure Virtual Machine where Promitor has been installed.

  2. Go to Identity and then System assigned, and change Status to On.

  3. In Identity - System assigned, under Permissions, select Azure role assignments and specify Monitoring Reader.

  4. In the Promitor Scraper runtime configuration file (runtime.yaml) and the Promitor Resource Discovery runtime configuration file (runtime.yaml), specify SystemAssignedManagedIdentity for authentication.mode.

  5. In the Promitor Scraper configuration file (metrics-declaration.yaml) and the Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml), specify the information on the Azure instance to connect to.

    • Promitor Scraper configuration file (metrics-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureMetadata.

    • Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureLandScape.

- Using the managed ID method (user-assigned) to connect to Azure

Perform steps 1 to 5 in Azure Portal and then perform steps 6 and 7 on a host where Promitor has been installed.

  1. In the service search, select Managed Identities and then Create Managed Identity.

  2. Specify a resource group, name, and other information to create a managed ID.

  3. In Azure role assignments, assign Monitoring Reader.

  4. In Virtual Machines, select the Azure Virtual Machine where Promitor has been installed.

  5. Select Identity, User assigned, and then Add, and add the managed ID you created in step 2.

  6. In the Promitor Scraper runtime configuration file (runtime.yaml) and the Promitor Resource Discovery runtime configuration file (runtime.yaml), specify UserAssignedManagedIdentity for authentication.mode.

  7. In the Promitor Scraper configuration file (metrics-declaration.yaml) and the Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml), specify the information on the Azure instance to connect to.

    • Promitor Scraper configuration file (metrics-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureMetadata.

    • Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml)

      Specify the information on the Azure instance to connect to for azureLandScape.

(c) Configuring a proxy-based connection to Azure (optional)

If your connection to Azure must be established via a proxy, use the HTTPS_PROXY environment variable. For details on how to set it up, see 2.19.2(8)(c) Connect to CloudWatch through a proxy (for Linux) (optional) in 2.19 Setup for JP1/IM - Agent (for UNIX). For NO_PROXY, specify the value of resourceDiscovery.host in the Promitor Scraper runtime configuration file (runtime.yaml).
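A minimal sketch of the two environment variable values follows; the proxy host and port are assumptions.

  HTTPS_PROXY=http://proxy.example.com:8080
  NO_PROXY=value-of-resourceDiscovery.host-in-runtime.yaml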

(d) Configuring scraping targets (required)

- Configure monitoring targets that must be specified separately (required)

Monitoring targets are detected automatically by default; however, some services, such as those described below, must be specified manually. For these services to be detected, edit the Promitor Scraper configuration file (metrics-declaration.yaml) to specify your monitoring targets separately.

  • Services you must specify separately as monitoring targets

    These are the services shown with automatic discovery disabled in the table listing the services Promitor can monitor in 3.15.1(1)(h) Promitor (Azure Monitor performance data collection capability) in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

  • How to specify monitoring targets separately

    Uncomment the monitoring target in the Promitor Scraper configuration file (metrics-declaration.yaml) and add it to the resources section, as in the sketch below.
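The following is a minimal sketch of a separately specified monitoring target, assuming the metrics-declaration.yaml schema described in the Promitor documentation; the metric, aggregation, and resource names are illustrative.

  metrics:
    - name: azure_logic_apps_runs_failed
      description: "Total number of failed runs"
      resourceType: LogicApp
      azureMetricConfiguration:
        metricName: RunsFailed
        aggregation:
          type: Total
      resources:
        - workflowName: my-workflow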

- Change monitoring targets (optional)

In Promitor, monitoring targets can be specified in the following two ways:

  • Specifying monitoring targets separately

    If you want to separately specify Azure resources to be monitored, add them to the Promitor Scraper configuration file (metrics-declaration.yaml).

  • Detecting monitoring targets automatically

    If you want to detect resources in your tenant automatically and monitor Azure resources in them, add them to the Promitor Scraper configuration file (metrics-declaration.yaml) and the Promitor Resource Discovery configuration file (resource-discovery-declaration.yaml).

- Change monitoring metrics (optional)

To change metrics to be collected or displayed:

  1. Confirm that Azure Monitor has collected the metric you want to collect.

    As preparation for the settings in the next steps, check the metric name and the aggregation type.

    For the metric name, see "Metric" in "Reference > Supported metrics > Resource metrics" in the Azure Monitor documentation. For the aggregation type, see "Aggregation Type" in the same section.

  2. Edit the settings in the Prometheus configuration file (jpc_prometheus_server.yml).

    If you want to change metrics to be collected, modify the metric_relabel_configs setting in the Prometheus configuration file (jpc_prometheus_server.yml).

    For details on the Prometheus configuration file, see Prometheus configuration file (jpc_prometheus_server.yml) of JP1/IM - Agent in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

  3. Edit the settings in the Promitor metric definition file (metrics_promitor.conf).

    If you want to change metrics displayed in the Trends tab of the integrated operation viewer, edit the settings in the Promitor metric definition file (metrics_promitor.conf).

    For details on the Promitor metric definition file, see Promitor metric definition file (metrics_promitor.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(e) Configuring labels for tenant information (optional)

Add labels for the tenant ID and subscription ID of a monitoring target to the property label definition file (property_labels.conf). Otherwise, the tenant and subscription are shown as default in the properties of the IM management node and in the extended attributes of JP1 events.

For details on the property label definition file, see Property label definition file (property_labels.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(f) Configuring the system node definition file (imdd_systemnode.conf) (required)

To create a system node described in 3.15.6(1)(i) Tree format in the JP1/Integrated Management 3 - Manager Overview and System Design Guide, edit the system node definition file (imdd_systemnode.conf) to configure the setting items listed in the table below. You can specify any values for the items that are not listed in the table.

Table 1‒14: Settings in the system node definition file (imdd_systemnode.conf)

displayName

  Specify the name of the service that publishes metrics for Azure Monitor.

type

  Specify the following value in uppercase characters:

  JP1PC-AZURE-Azure-service-name

  Azure-service-name is one of the names in the Promitor resourceType name column of the table listing the services Promitor can monitor, in 3.15.1(1)(h) Promitor (Azure Monitor performance data collection capability) in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

name

  Specify the following:

  [{".*":"regexp"}]

The following table shows a setting example of the system node definition file for creating system management nodes for the Azure services that are defined by default as monitoring targets in the Promitor metric definition file (metrics_promitor.conf).

Table 1‒15: Setting example of the system node definition file (imdd_systemnode.conf)

displayName: Azure Function App
type: JP1PC-AZURE-FUNCTIONAPP
name: [{".*":"regexp"}]

displayName: Azure Container Instances
type: JP1PC-AZURE-CONTAINERINSTANCE
name: [{".*":"regexp"}]

displayName: Azure Kubernetes Service
type: JP1PC-AZURE-KUBERNETESSERVICE
name: [{".*":"regexp"}]

displayName: Azure File Storage
type: JP1PC-AZURE-FILESTORAGE
name: [{".*":"regexp"}]

displayName: Azure Blob Storage
type: JP1PC-AZURE-BLOBSTORAGE
name: [{".*":"regexp"}]

displayName: Azure Service Bus Namespace
type: JP1PC-AZURE-SERVICEBUSNAMESPACE
name: [{".*":"regexp"}]

displayName: Azure Cosmos DB
type: JP1PC-AZURE-COSMOSDB
name: [{".*":"regexp"}]

displayName: Azure SQL Database
type: JP1PC-AZURE-SQLDATABASE
name: [{".*":"regexp"}]

displayName: Azure SQL Server
type: JP1PC-AZURE-SQLSERVER
name: [{".*":"regexp"}]

displayName: Azure SQL Managed Instance
type: JP1PC-AZURE-SQLMANAGEDINSTANCE
name: [{".*":"regexp"}]

displayName: Azure SQL Elastic Pool
type: JP1PC-AZURE-SQLELASTICPOOL
name: [{".*":"regexp"}]

displayName: Azure Logic Apps
type: JP1PC-AZURE-LOGICAPP
name: [{".*":"regexp"}]

The following shows how the items in the above table can be defined in the system node definition file.

{
  "meta":{
    "version":"2"
  },
  "allSystem":[
    {
      "id":"functionApp",
      "displayName":"Azure Function App",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-FUNCTIONAPP",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"containerInstance",
      "displayName":"Azure Container Instances",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-CONTAINERINSTANCE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"kubernetesService",
      "displayName":"Azure Kubernetes Service",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-KUBERNETESSERVICE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"fileStorage",
      "displayName":"Azure File Storage",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-FILESTORAGE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"blobStorage",
      "displayName":"Azure Blob Storage",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-BLOBSTORAGE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"serviceBusNamespace",
      "displayName":"Azure Service Bus Namespace",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-SERVICEBUSNAMESPACE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"cosmosDb",
      "displayName":"Azure Cosmos DB",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-COSMOSDB",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"sqlDatabase",
      "displayName":"Azure SQL Database",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-SQLDATABASE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"sqlServer",
      "displayName":"Azure SQL Server",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-SQLSERVER",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"sqlManagedInstance",
      "displayName":"Azure SQL Managed Instance",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-SQLMANAGEDINSTANCE",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"sqlElasticPool",
      "displayName":"Azure SQL Elastic Pool",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-SQLELASTICPOOL",
          "name":[{".*":"regexp"}]
        }
      ]
    },
    {
      "id":"logicApp",
      "displayName":"Azure Logic Apps",
      "objectRoot":[
        {
          "type":"JP1PC-AZURE-LOGICAPP",
          "name":[{".*":"regexp"}]
        }
      ]
    }
  ]
}

With the system node definition file configured, when the jddcreatetree command is run and a Promitor SID other than VirtualMachine is created, an IM management node is displayed under the system node that has the corresponding Azure service name. For a Promitor SID of VirtualMachine, an IM management node is displayed under the node that represents the host, without the service being listed in the system node definition file.

For details on the system node definition file, see System node definition file (imdd_systemnode.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(g) Changing Ports (Optional)

■ Specifying a port number of scrape used by Promitor Scraper

The listen port that the Promitor Scraper uses is specified in the Promitor Scraper runtime configuration file (runtime.yaml).

For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

The default port is "20719". If port number is changed, review setup of the firewall and prohibit accessing from outside.

Notes:

Change the port in the Promitor Scraper runtime configuration file (runtime.yaml), not with a command-line option. If you change this configuration file, you must also change the Promitor discovery configuration file (jpc_file_sd_config_promitor.yml).

For details, see Promitor Scraper runtime configuration file (runtime.yaml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
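A minimal sketch of the port setting follows, assuming the server section of the Promitor runtime configuration:

  server:
    httpPort: 20719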

■ Specifying a port number of scrape used by Promitor Resource Discovery

The listen port that the Promitor Resource Discovery uses is specified in the Promitor Resource Discovery runtime configuration file (runtime.yaml).

For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

The default port is "20720". If port number is changed, review setup of the firewall and prohibit accessing from outside.

Notes:

Change the port in the Promitor Resource Discovery runtime configuration file (runtime.yaml), not with a command-line option. If you change this configuration file, you must also change the Promitor Scraper runtime configuration file (runtime.yaml).

For details, see Promitor Resource Discovery runtime configuration file (runtime.yaml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(9) Setup of Fluentd

(a) Changing the setup of the log monitoring common definition file (for Windows) (Optional)

If you want to change the following settings, edit the log monitoring common definition file:

  • Port number of the integrated agent control base

  • Buffer plug-in settings

For details about the log monitoring common definition file, see Log monitoring common definition file (jpc_fluentd_common.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

(b) Monitoring a text-format log file (for Windows) (Required)

If you want to monitor a new text-format log file, perform the following steps:

  1. Create a text-formatted log file monitoring definition file.

    Create a text-formatted log file monitoring definition file by copying the template shown below, and rename the copied file to "fluentd_log-monitoring-name_tail.conf".

    Copy source: Agent-path\conf\fluentd_@@trapname@@_tail.conf.template

    Copy destination: Agent-path\conf\user\fluentd_log-monitoring-name_tail.conf

    For descriptions of the monitoring text-formatted log file definition file, see Monitoring text-formatted log file definition file (fluentd_@@trapname@@_tail.conf.template) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  2. Edit the log monitoring target definition file (jpc_fluentd_common_list.conf).

    If you want to temporarily stop log monitoring for some monitoring definition files, list the names of those monitoring definition files in the log monitoring target definition file.

    For details about the log monitoring target definition file, see Log monitoring target definition file (jpc_fluentd_common_list.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference. If you have not edited the log monitoring target definition file, no editing is required.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  3. Apply the changes to the integrated operation viewer tree.

    For details about how to apply them, see 1.21.2(16) Creation and import of IM management node tree data (for Windows) (mandatory).

    Note

    If you change the monitoring settings of the text-format log file (for example, when the log file trap name in the monitoring definition file is changed), perform steps 2 and 3 above.

(c) Modifying the Monitoring Setup of the Text-Format Logging File (for Windows) (Optional)

To change the monitoring setup of a text-format log file, perform the following steps:

  1. Change the text-formatted log file monitoring definition file.

    Modify the created monitoring definition file (fluentd_log-monitoring-name_tail.conf).

    For descriptions of the monitoring text-formatted log file definition file, see Monitoring text-formatted log file definition file (fluentd_@@trapname@@_tail.conf.template) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  2. Edit the log monitoring target definition file (jpc_fluentd_common_list.conf).

    List the file names of the monitoring definition files in the log monitoring target definition file in the following cases:

    • When the log monitoring name of the monitoring definition file is changed

    • When you are performing operations that temporarily stop log monitoring for some monitoring definition files

    For details about the log monitoring target definition file, see Log monitoring target definition file (jpc_fluentd_common_list.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference. If you have not edited the log monitoring target definition file, no editing is required.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  3. Reflect the changes in the integrated operation viewer tree.

    If a value in the [Metric Settings] section is changed, reflect the changes in the integrated operation viewer tree. For details about how to reflect them, see 1.21.2(16) Creation and import of IM management node tree data (for Windows) (mandatory).

(d) Deleting the monitoring settings of the text log file (for Windows) (optional)

To delete the monitoring settings of a text-format log file, perform the following steps:

  1. Delete the text-formatted log file monitoring definition file.

    Delete the created monitoring definition file (fluentd_log-monitoring-name_tail.conf).

    For details about how to delete configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  2. Edit the log monitoring target definition file (jpc_fluentd_common_list.conf)

    If you temporarily stopped log monitoring for some monitoring definition files, delete the file names of those monitoring definition files defined in the log monitoring target definition file.

  3. Reflect the changes in the integrated operation viewer tree.

    For details about how to reflect them, see 1.21.2(16) Creation and import of IM management node tree data (for Windows) (mandatory).

(e) Monitor Windows Event Log (Required)

To monitor a new Windows event log, perform the following steps:

  1. Create a Windows event log monitoring definition file.

    Create a Windows event log monitoring definition file by copying the template shown below, and rename the copied file to "fluentd_log-monitoring-name_wevt.conf".

    Copy source: Agent-path\conf\fluentd_@@trapname@@_wevt.conf.template

    Copy destination: Agent-path\conf\user\fluentd_log-monitoring-name_wevt.conf

    For descriptions of Windows event log monitoring definition file, see Windows event log monitoring definition file (fluentd_@@trapname@@_wevt.conf.template) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  2. Edit the log monitoring target definition file (jpc_fluentd_common_list.conf).

    If you want to temporarily stop log monitoring for some monitoring definition files, list the file names of those monitoring definition files in the log monitoring target definition file.

    For details about the log monitoring target definition file, see Log monitoring target definition file (jpc_fluentd_common_list.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference. If the log monitoring target definition file is not being edited, no editing is required.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  3. Reflect the changes in the integrated operation viewer tree.

    For details about how to reflect them, see 1.21.2(16) Creation and import of IM management node tree data (for Windows) (mandatory).

    Note

    If you change the monitoring settings of the Windows event log (for example, when the log file trap name in the monitoring definition file is changed), perform steps 2 and 3 above.

(f) Modify the Monitor Setup for Windows Event Log (Optional)

To change the monitoring settings of the Windows event log, perform the following steps:

  1. Change Windows event log monitoring definition file.

    Change the monitoring definition file (fluentd_log-monitoring-name_wevt.conf) that has been created.

    For descriptions of Windows event log monitoring definition file, see Windows event log monitoring definition file (fluentd_@@trapname@@_wevt.conf.template) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  2. Edit the log monitoring target definition file (jpc_fluentd_common_list.conf).

    List the file names of the monitoring definition files in the log monitoring target definition file in the following cases:

    • When the log file trap name of the monitoring definition file is changed

    • When you are performing operations that temporarily stop log monitoring for some monitoring definition files

    For details about the log monitoring target definition file, see Log monitoring target definition file (jpc_fluentd_common_list.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference. If you have not edited the log monitoring target definition file, no editing is required.

    For details on how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  3. Apply the changes to the integrated operation viewer tree.

    If a value in the [Metric Settings] section is changed, reflect the changes in the integrated operation viewer tree. For details about how to reflect them, see 1.21.2(16) Creation and import of IM management node tree data (for Windows) (mandatory).

(g) Delete Monitoring settings of Windows Event Logs (Optional)

To delete monitoring settings of Windows event logs, perform the following steps:

  1. Delete the Windows event log monitoring definition file.

    Delete the monitoring definition file (fluentd_log-monitoring-name_wevt.conf) that has been created.

    For descriptions of Windows event log monitoring definition file, see Windows event log monitoring definition file (fluentd_@@trapname@@_wevt.conf.template) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details about how to delete configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  2. Edit the log monitoring target definition file (jpc_fluentd_common_list.conf).

    If you temporarily stopped log monitoring for some monitoring definition files, delete the file names of those monitoring definition files defined in the log monitoring target definition file.

  3. Apply the changes to the integrated operation viewer tree.

    For details about how to reflect them, see 1.21.2(16) Creation and import of IM management node tree data (for Windows) (mandatory).

(h) Setup of the log metrics definition (Required)

If you want to use the log metrics feature, configure the settings of Fluentd of JP1/IM - Agent in the procedure for enabling the add-on programs, and then configure the following settings.

■ Editing a log metrics definition file (defining log metrics)

Create a log metrics definition file (fluentd_any-name_logmetrics.conf) to define input and output plug-in features.

In addition, specify the log metrics to be monitored in the output plug-in definition of the log metrics definition file.

  • When adding log metrics to be monitored:

    Add a new <metric> definition in parallel with the existing <metric> definition.

  • When changing the log metrics to be monitored:

    Change the appropriate <metric> definition.

  • When deleting monitored log metrics

    Delete or comment out all relevant definitions in the log metrics definition file.

For details on sample log metrics definition files, see the section describing the applicable file under 3.15.1(1)(l) Fluentd (Log metrics) in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.
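The following is a minimal sketch of a <metric> definition in the output plug-in section, assuming the fluent-plugin-prometheus syntax; the metric name, type, and label are illustrative.

  <metric>
    name user_error_log_total
    type counter
    desc Total number of error lines detected in the monitored log
    <labels>
      hostname ${hostname}
    </labels>
  </metric>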

■ Editing the log monitoring target definition file (adding include statements)

To enable logs to be monitored as defined in the log metrics definition file, add a row that starts with @include to the log monitoring target definition file (jpc_fluentd_common_list.conf), followed by the name of the log metrics definition file you edited in Editing a log metrics definition file (defining log metrics).

For details on sample log monitoring target definition files, see the section describing the applicable file under 3.15.1(1)(l) Fluentd (Log metrics) in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.
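For example, the added row can look like the following minimal sketch; the file name is an assumption.

  @include user/fluentd_weblog_logmetrics.conf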

■ Restarting Fluentd

To apply the definitions specified in Editing a log metrics definition file (defining log metrics) and Editing the log monitoring target definition file (adding include statements), restart Fluentd.

For details on starting and stopping services before the restart, see Chapter 10. Starting and stopping JP1/IM - Agent in the JP1/Integrated Management 3 - Manager Administration Guide.

■ Setting up log metrics definition in JP1/IM - Manager

If you want to show time-series data on log metrics when trend information of nodes is displayed in the Trends tab of the integrated operation viewer of JP1/IM - Manager through the log metrics feature, define log metrics to be shown in JP1/IM - Manager.

Use user-specific metric definitions for the log metrics definitions here.

For descriptions, see User-specific metric definition file (metrics_any-Prometheus-trend-name.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(i) Changing Ports (Optional)

■ Specifying the port number for scraping used by Fluentd

Specify the port number for scraping used by the fluent-plugin-prometheus plug-in in the log metrics definition file.

Specify the listening-port-number# for port in <source> described in the following example of changes.

Example of changes:

## Input
<worker worker-ids-used-for-the-log-metrics-feature>
  <source>
    @type prometheus
    bind 0.0.0.0
    port listening-port-number
    metrics_path /metrics
  </source>
</worker>
 
<worker worker-id>
<source>
(Omitted)
</source>
 
(Omitted below)
#:

The actual listen port used by Fluentd (log metrics feature) depends on the worker_id of the worker used by the log metrics feature (the value specified as the worker ID in the log metrics definition file (fluentd_any-name_logmetrics.conf)), and is the port number given by the following formula:

24820 + worker_id

If the log metrics feature uses 129 workers, the default port numbers are sequential numbers from 24820 to 24948.

■ Changing the port number for scraping used by Prometheus

Change the port number for scraping defined in the Prometheus discovery configuration file to the listening-port-number+worker-id specified in Specifying the port number for scraping used by Fluentd.

Specify listening-port-number+worker-id for targets, as shown in the following example of changes.

- targets:
  - name-of-monitored-host:listening-port-number+worker-id
(Omitted)
  labels:
(Omitted)

(j) Monitoring SAP system logging (Optional)

To newly monitor the system log information of an SAP system, perform the Fluentd setup procedure described below, along with the Script exporter setup procedure described in 1.21.2(12)(d) Setting when executing SAP system log extract command (Optional).

  1. Create a system log information monitoring definition file for the SAP system.

    Create a text-formatted log file monitoring definition file by copying the sample file (fluentd_sap_syslog_tail.conf). For the storage location of the sample file, see the list of files and directories in Appendix A.4 JP1/IM - Agent in the JP1/Integrated Management 3 - Manager Overview and System Design Guide. Rename the copied file to "fluentd_log-monitoring-name_tail.conf".

    For details about the contents of the file, see Monitoring text-formatted log file definition file (fluentd_@@trapname@@_tail.conf.template) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details about how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  2. Edit the log monitoring target definition file (jpc_fluentd_common_list.conf).

    If you want to temporarily stop log monitoring for some monitoring definition files, list the names of those monitoring definition files in the log monitoring target definition file.

    For details about the log monitoring target definition file, see Log monitoring target definition file (jpc_fluentd_common_list.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference. If you have not edited the log monitoring target definition file, no editing is required.

    For details about how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  3. Reflect the changes in the integrated operation viewer tree.

    For details about how to reflect them, see 1.21.2(16) Creation and import of IM management node tree data (for Windows) (mandatory).

(k) Modify SAP system logging monitoring configuration (Optional)

For details about how to change the monitoring settings of SAP system's system log information, see 1.21.2(9)(c) Modifying the Monitoring Setup of the Text-Format Logging File (for Windows) (Optional).

(l) Remove SAP system logging monitoring configuration (Optional)

For details about deleting the monitoring configuration for SAP system's system log information, see 1.21.2(9)(d) Deleting the monitoring settings of the text log file (for Windows) (optional).

(m) Monitoring CCMS alerting for SAP system (Optional)

To newly monitor CCMS alert information of an SAP system, perform the following steps.

  1. Create a CCMS alert information monitoring definition file for the SAP system.

    Create a text-formatted log file monitoring definition file by copying the sample file (fluentd_sap_alertlog_tail.conf). For the storage location of the sample file, see the list of files and directories in Appendix A.4 JP1/IM - Agent in the JP1/Integrated Management 3 - Manager Overview and System Design Guide. Rename the copied file to "fluentd_log-monitoring-name_tail.conf".

    For details about the contents of the file, see Monitoring text-formatted log file definition file (fluentd_@@trapname@@_tail.conf.template) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details about how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  2. Edit the log monitoring target definition file (jpc_fluentd_common_list.conf).

    If you want to temporarily stop log monitoring for some monitoring definition files, list the names of those monitoring definition files in the log monitoring target definition file.

    For details about the log monitoring target definition file, see Log monitoring target definition file (jpc_fluentd_common_list.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference. If you have not edited the log monitoring target definition file, no editing is required.

    For details about how to change configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

  3. Reflect the changes in the integrated operation viewer tree.

    For details about how to reflect them, see 1.21.2(16) Creation and import of IM management node tree data (for Windows) (mandatory).

(n) Modify SAP system CCMS alert information monitoring settings (Optional)

For details about how to change the SAP system CCMS alert information monitoring settings, see 1.21.2(9)(c) Modifying the Monitoring Setup of the Text-Format Logging File (for Windows) (Optional).

(o) Remove SAP system CCMS alert information monitoring settings (Optional)

For details about how to delete SAP system CCMS alert information monitoring settings, see 1.21.2(9)(d) Deleting the monitoring settings of the text log file (for Windows) (optional).

(10) Setting up scraping definitions

If you want features provided by the add-on programs to be scraped, provide the scraping definitions listed in the following table.

Table 1‒16: Scraping definitions for the features provided by the add-on programs

• Windows performance data collection

  OS that the add-on runs on: Windows. Exporter or target: Windows exporter. Scraping definition: definitions are not required.

• Linux process data collection

  OS that the add-on runs on: Linux. Exporter or target: Process exporter. Scraping definition: definitions are not required.

• AWS CloudWatch performance data collection

  OS that the add-on runs on: Windows and Linux. Exporter or target: Yet another cloudwatch exporter. Scraping definition: definitions are not required.

• Azure Monitor performance data collection

  OS that the add-on runs on: Windows and Linux. Exporter or target: Promitor. Scraping definition: definitions are not required.

• Log metrics

  OS that the add-on runs on: Windows and Linux. Exporter or target: Fluentd. Scraping definition: the definition must be provided to use the log metrics feature. For details on what definition is needed, see 1.21.2(10)(a) Scraping definition for the log metrics feature.

• UAP monitoring

  OS that the add-on runs on: Windows and Linux. Exporter or target: Script exporter. Scraping definition: a monitoring target script must be set up. For details on what settings are needed, see 1.21.2(10)(b) Scraping definition for Script exporter.

(a) Scraping definition for the log metrics feature

The log metrics feature is scraped in the same way as a user-specific Exporter, based on the following scraping definitions.

  • Create a user-specific discovery configuration file (required)

    Create a user-specific discovery configuration file (user_file_sd_config_any-name.yml) and define what should be monitored.

    For details on the user-specific discovery configuration file, see the section describing the applicable file in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details on what should be defined for the log metrics feature, see the section describing sample files for the applicable file under 3.15.1(1)(l) Fluentd (Log metrics) in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

  • Set up scrape_configs in the Prometheus configuration file (required)

    Add the scrape_configs setting in the Prometheus configuration file (jpc_prometheus_server.yml).

    For details on the Prometheus configuration file, see the section describing the applicable file in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

    For details on what should be defined for the log metrics feature, see the section describing sample files for the applicable file under 3.15.1(1)(l) Fluentd (Log metrics) in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.
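The following is a minimal sketch of the two definitions, assuming the default listen port described in 1.21.2(9)(i); the file names, job name, and label values are illustrative, and the sample files referenced above are authoritative.

  # User-specific discovery configuration file (user_file_sd_config_logmetrics.yml)
  - targets:
      - 'monitored-host:24820'
    labels:
      jp1_pc_exporter: 'JPC Fluentd'

  # Addition to scrape_configs in the Prometheus configuration file (jpc_prometheus_server.yml)
  scrape_configs:
    - job_name: 'user_logmetrics'
      file_sd_configs:
        - files:
          - 'user_file_sd_config_logmetrics.yml'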

(b) Scraping definition for Script exporter

You can specify a scraping definition by using either of the following methods: the http_sd_config method, in which all scripts defined in the Script exporter configuration file (jpc_script_exporter.yml) are run, or the file_sd_config method, in which you specify one of the scripts defined in the Script exporter configuration file (jpc_script_exporter.yml) as a params element of scrape_configs in the Prometheus configuration file (jpc_prometheus_server.yml). The default is the http_sd_config method.

For details on the Script exporter configuration file and the Prometheus configuration file, see the section describing the applicable file in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The following shows scraping definition examples.

  • Example of a scraping definition with the http_sd_config method

scrape_configs:
  - job_name: jpc_script_exporter
    http_sd_configs:
      - url: http://installation-host-name:port/discovery
    relabel_configs:
      - source_labels: [__param_script]
        target_label: jp1_pc_script
      - target_label: jp1_pc_exporter
        replacement: JPC Script Exporter
      - target_label: jp1_pc_category
        replacement: any-category-name
      - target_label: jp1_pc_trendname
        replacement: script_exporter
      - target_label: jp1_pc_multiple_node
        replacement: jp1_pc_exporter="{job='jpc_script.*',jp1_pc_multiple_node=''}"
      - target_label: jp1_pc_nodelabel
        replacement: Script metric collector(Script exporter)
      - target_label: jp1_pc_agent_create_flag
        replacement: false
    metric_relabel_configs:
      - source_labels: [jp1_pc_script]
        target_label: jp1_pc_nodelabel
      - regex: (jp1_pc_multiple_node|jp1_pc_script|jp1_pc_agent_create_flag)
        action: labeldrop
installation-host-name

Specify the name of the host where Script exporter has been installed, with 1 to 255 characters other than control characters.

port

Specify the port number Script exporter is going to use.

any-category-name

Specify a category ID of the IM management node for the agent SID, with 1 to 255 characters other than control characters.

  • Example of a scraping definition with the file_sd_config method

scrape_configs:
# Example of running a script in a configuration file
  - job_name: any-scraping-job-name-1
    file_sd_configs:
      - files:
        - 'path-to-the-Script-exporter-discovery-configuration-file'
    metrics_path: /probe
    params:
      script: [scripts.name-in-the-Script-exporter-configuration-file]
    relabel_configs:
      - source_labels: [__param_script]
        target_label: jp1_pc_script
      - target_label: jp1_pc_category
        replacement: any-category-name
      - target_label: jp1_pc_nodelabel
        replacement: Script metric collector(Script exporter)
    metric_relabel_configs:
      - source_labels: [jp1_pc_script]
        target_label: jp1_pc_nodelabel
      - regex: (jp1_pc_multiple_node|jp1_pc_script|jp1_pc_agent_create_flag)
        action: labeldrop
 
# Example of running a script in a configuration file with additional arguments 1 and 2 added to the script
  - job_name: any-scraping-job-name-2
    file_sd_configs:
      - files:
        - 'path-to-the-Script-exporter-discovery-configuration-file'
    metrics_path: /probe
    params:
      script: [scripts.name-in-the-Script-exporter-configuration-file]
      argument-name-1: [argument-name-1-value]
      argument-name-2: [argument-name-2-value]
    relabel_configs:
      - source_labels: [__param_script]
        target_label: jp1_pc_script
      - target_label: jp1_pc_category
        replacement: any-category-name
      - target_label: jp1_pc_nodelabel
        replacement: Script metric collector(Script exporter)
    metric_relabel_configs:
      - source_labels: [jp1_pc_script]
        target_label: jp1_pc_nodelabel
      - regex: (jp1_pc_multiple_node|jp1_pc_script|jp1_pc_agent_create_flag)
        action: labeldrop
any-scraping-job-name

Specify a given scraping job name that is unique on the host, with 1 to 255 characters other than control characters.

path-to-the-Script-exporter-discovery-configuration-file

Specify the path to the Script exporter discovery configuration file (jpc_file_sd_config_script.yml).

any-category-name

Specify a category ID of the IM management node for the agent SID, with 1 to 255 characters other than control characters.

(11) Setting up container monitoring

You use different features and setup procedures to monitor different monitoring targets. The following table lists what feature you will use and where you can see the setup procedure for each monitoring target.

• Monitoring target: Red Hat OpenShift, Kubernetes, or Amazon Elastic Kubernetes Service (EKS)

  Feature you use: user-specific Prometheus.

  See the setup procedure in: the subsections following this table.

• Monitoring target: Azure Kubernetes Service (AKS)

  Feature you use: Azure's monitoring feature (Promitor). AKS can be monitored by default.

  See the setup procedure in: 1.21.2(8) Set up of Promitor.

(a) Configuring the settings of scraping through user-specific Prometheus (required)

  • Red Hat OpenShift

    Settings are not required.

    An openshift-monitoring project is installed and a scraping setting is added during installation.

  • Kubernetes and Amazon Elastic Kubernetes Service (EKS)

    When Prometheus is not installed, or when Prometheus is installed but the scraping targets listed in the following table are not configured, add a scraping setting.

    The scraping targets and the data you can retrieve are as follows:

    • kube-state-metrics: status data on nodes, pods, and workloads

    • node_exporter: performance data of nodes

    • kubelet: performance data of pods

    For the metrics you can collect, see the section describing Key metric items of 3.15.4(2)(a) Red Hat OpenShift in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

The subsequent steps are common to Red Hat OpenShift, Kubernetes, and Amazon Elastic Kubernetes Service (EKS).

(b) Configuring the settings for a connection (required)

Configure the remote write setting to collect information from user-specific Prometheus.

For details on how to modify the settings for each monitoring target, see 1.21.2(11)(c) Modifying the Prometheus settings (Red Hat OpenShift) and 1.21.2(11)(d) Modifying the Prometheus settings (Amazon Elastic Kubernetes Service (EKS)).

- global.external_labels section
  • jp1_pc_prome_hostname (required)

    Specify the name of the Prometheus host.

  • jp1_pc_prome_clustername (optional)

    Specify the name of the cluster.

    When this label is omitted, the system does not create an IM management node for the cluster.

(Example)

global:
external_labels:
  jp1_pc_prome_hostname: promHost
  jp1_pc_prome_clustername: myCluster
- remote_write section
  • Configure a connection destination

    Specify endpoints of JP1/IM - Manager (Intelligent Integrated Management Base) as remote write destinations. For one endpoint, enter the label settings (1) and (2) required for container monitoring in its section. When creating the cluster nodes, specify another endpoint and enter the settings (3).

  • Configure labels necessary for container monitoring

    To give the labels necessary for container monitoring, add the statements shown below to the write_relabel_configs section. This does not affect the local storage of the user-specific Prometheus, because the relabeling is applied during remote write.

    When you monitor Red Hat OpenShift, add the following settings at the beginning of (1) Basic settings and (2) Settings for creating the pod nodes.

  - source_labels: ['__name__']
    regex: 'kube_.*'
    target_label: instance
    replacement: any-value-that-is-unique-within-the-cluster#

#: The value specified here is displayed in the tree as the host on which Kubernetes state metric collector(Kube state metrics) runs.

(1) Basic settings

  - source_labels: ['__name__']
    regex: 'kube_job_status_failed|kube_job_owner|kube_pod_status_phase|kube_daemonset_status_desired_number_scheduled|kube_daemonset_status_current_number_scheduled|kube_deployment_spec_replicas|kube_deployment_status_replicas_available|kube_replicaset_spec_replicas|kube_replicaset_status_ready_replicas|kube_replicaset_owner|kube_statefulset_replicas|kube_statefulset_status_replicas_ready|kube_node_status_condition|container_cpu_usage_seconds_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes|node_boot_time_seconds|node_context_switches_total|node_cpu_seconds_total|node_disk_io_now|node_disk_io_time_seconds_total|node_disk_read_bytes_total|node_disk_reads_completed_total|node_disk_writes_completed_total|node_disk_written_bytes_total|node_filesystem_avail_bytes|node_filesystem_files|node_filesystem_files_free|node_filesystem_free_bytes|node_filesystem_size_bytes|node_intr_total|node_load1|node_load15|node_load5|node_memory_Active_file_bytes|node_memory_Buffers_bytes|node_memory_Cached_bytes|node_memory_Inactive_file_bytes|node_memory_MemAvailable_bytes|node_memory_MemFree_bytes|node_memory_MemTotal_bytes|node_memory_SReclaimable_bytes|node_memory_SwapFree_bytes|node_memory_SwapTotal_bytes|node_netstat_Icmp6_InMsgs|node_netstat_Icmp_InMsgs|node_netstat_Icmp6_OutMsgs|node_netstat_Icmp_OutMsgs|node_netstat_Tcp_InSegs|node_netstat_Tcp_OutSegs|node_netstat_Udp_InDatagrams|node_netstat_Udp_OutDatagrams|node_network_flags|node_network_iface_link|node_network_mtu_bytes|node_network_receive_errs_total|node_network_receive_packets_total|node_network_transmit_colls_total|node_network_transmit_errs_total|node_network_transmit_packets_total|node_time_seconds|node_uname_info|node_vmstat_pswpin|node_vmstat_pswpout'
    action: 'keep'
  - source_labels: ['__name__','namespace']
    regex: '(kube_pod_|kube_job_|container_).*;(.*)'
    target_label: jp1_pc_nodelabel
    replacement: $2
  - source_labels: ['__name__','node']
    regex: 'kube_node_.*;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__','daemonset']
    regex: 'kube_daemonset_.*;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__','deployment']
    regex: 'kube_deployment_.*;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__','replicaset']
    regex: 'kube_replicaset_.*;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__','statefulset']
    regex: 'kube_statefulset_.*;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__','owner_kind','owner_name']
    regex: 'kube_job_owner;CronJob;(.*)'
    target_label: jp1_pc_nodelabel
  - source_labels: ['__name__']
    regex: 'node_.*'
    target_label: jp1_pc_nodelabel
    replacement: Linux metric collector(Node exporter)
  - source_labels: ['__name__']
    regex: '(kube_pod_|kube_job_|container_).*'
    target_label: jp1_pc_module
    replacement: kubernetes/Namespace
  - source_labels: ['__name__']
    regex: 'kube_node_.*'
    target_label: jp1_pc_module
    replacement: kubernetes/Node
  - source_labels: ['__name__']
    regex: 'kube_daemonset_.*'
    target_label: jp1_pc_module
    replacement: kubernetes/DaemonSet
  - source_labels: ['__name__']
    regex: 'kube_deployment_.*'
    target_label: jp1_pc_module
    replacement: kubernetes/Deployment
  - source_labels: ['__name__']
    regex: 'kube_replicaset_.*'
    target_label: jp1_pc_module
    replacement: kubernetes/ReplicaSet
  - source_labels: ['__name__']
    regex: 'kube_statefulset_.*'
    target_label: jp1_pc_module
    replacement: kubernetes/StatefulSet
  - source_labels: ['__name__','owner_kind']
    regex: 'kube_job_owner;CronJob'
    target_label: jp1_pc_module
    replacement: kubernetes/CronJob
  - source_labels: ['__name__']
    regex: 'kube_.*|container_.*'
    target_label: jp1_pc_trendname
    replacement: kubernetes
  - source_labels: ['__name__']
    regex: 'node_.*'
    target_label: jp1_pc_trendname
    replacement: node_exporter
  - source_labels: ['__name__']
    regex: 'kube_.*'
    target_label: jp1_pc_exporter
    replacement: JPC Kube state metrics
  - source_labels: ['__name__']
    regex: 'node_.*'
    target_label: jp1_pc_exporter
    replacement: JPC Node exporter
  - source_labels: ['__name__']
    regex: 'container_.*'
    target_label: jp1_pc_exporter
    replacement: JPC Kubelet
  - source_labels: ['__name__']
    regex: 'kube_.*'
    target_label: job
    replacement: jpc_kube_state
  - source_labels: ['__name__']
    regex: 'node_.*'
    target_label: job
    replacement: jpc_kube_node
  - source_labels: ['__name__']
    regex: 'container_.*'
    target_label: job
    replacement: jpc_kubelet
  - source_labels: ['__name__']
    regex: 'node_.*'
    target_label: jp1_pc_category
    replacement: platform
  - source_labels: ['job','instance']
    regex: 'jpc_kube_state;([^:]+):?(.*)'
    target_label: jp1_pc_remote_monitor_instance
    replacement: ${1}:Kubernetes state metric collector(Kube state metrics)
  - source_labels: ['job','instance']
    regex: 'jpc_kubelet;([^:]+):?(.*)'
    target_label: jp1_pc_remote_monitor_instance
    replacement: ${1}:Kubernetes resource metric collector(Kubelet)
  - regex: '__.+__|jp1_pc_prome_hostname|jp1_pc_prome_clustername|jp1_pc_nodelabel|jp1_pc_trendname|jp1_pc_module|jp1_pc_exporter|jp1_pc_remote_monitor_instance|instance|job|cronjob|namespace|schedule|concurrency_policy|daemonset|deployment|condition|status|job_name|owner_kind|owner_name|owner_is_controller|reason|replicaset|statefulset|revision|phase|node|kernel_version|os_image|container_runtime_version|kubelet_version|kubeproxy_version|pod_cidr|provider_id|system_uuid|internal_ip|key|value|effect|resource|unit|pod|host_ip|pod_ip|created_by_kind|created_by_name|uid|priority_class|host_network|ip|ip_family|image_image_id|image_spec|container_id|container|type|persistentvolumeclaim|label_.+_LABEL|id|name|device|major|minor|operation|cpu|failure_type|scope'
    action: 'labelkeep'

(2) Settings for creating the pod nodes

  - source_labels: ['__name__']
    regex: 'kube_pod_owner|kube_pod_status_phase|container_cpu_usage_seconds_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes'
    action: 'keep'
  - source_labels: ['pod']
    target_label: jp1_pc_nodelabel
  - target_label: jp1_pc_module
    replacement: kubernetes/Pod
  - target_label: jp1_pc_trendname
    replacement: kubernetes
  - target_label: jp1_pc_exporter
    replacement: JPC Kube state metrics
  - target_label: job
    replacement: jpc_kube_state
  - source_labels: ['instance']
    regex: '([^:]+):?(.*)'
    target_label: jp1_pc_remote_monitor_instance
    replacement: ${1}:Kubernetes state metric collector(Kube state metrics)
  - regex: '__.+__|jp1_pc_prome_hostname|jp1_pc_prome_clustername|jp1_pc_nodelabel|jp1_pc_trendname|jp1_pc_module|jp1_pc_exporter|jp1_pc_remote_monitor_instance|instance|job|cronjob|namespace|schedule|concurrency_policy|daemonset|deployment|condition|status|job_name|owner_kind|owner_name|owner_is_controller|reason|replicaset|statefulset|revision|phase|node|kernel_version|os_image|container_runtime_version|kubelet_version|kubeproxy_version|pod_cidr|provider_id|system_uuid|internal_ip|key|value|effect|resource|unit|pod|host_ip|pod_ip|created_by_kind|created_by_name|uid|priority_class|host_network|ip|ip_family|image_image_id|image_spec|container_id|container|type|persistentvolumeclaim|label_.+_LABEL|id|name|device|major|minor|operation|cpu|failure_type|scope'
    action: 'labelkeep'

(3) Settings for creating the cluster nodes

The following shows the relabel settings for creating cluster nodes:

  - source_labels: ['__name__','jp1_pc_prome_clustername']
    regex: '(container_cpu_usage_seconds_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes);(.+)'
    action: 'keep'
  - source_labels: ['jp1_pc_prome_clustername']
    target_label: jp1_pc_nodelabel
  - target_label: jp1_pc_module
    replacement: kubernetes/Cluster
  - target_label: jp1_pc_trendname
    replacement: kubernetes
  - target_label: jp1_pc_exporter
    replacement: JPC Kubelet
  - target_label: job
    replacement: jpc_kubelet
  - source_labels: ['instance']
    regex: '([^:]+):?(.*)'
    target_label: jp1_pc_remote_monitor_instance
    replacement: ${1}:Kubernetes resource metric collector(Kubelet)
  - regex: '__.+__|jp1_pc_prome_hostname|jp1_pc_prome_clustername|jp1_pc_nodelabel|jp1_pc_trendname|jp1_pc_module|jp1_pc_exporter|jp1_pc_remote_monitor_instance|instance|job|cronjob|namespace|schedule|concurrency_policy|daemonset|deployment|condition|status|job_name|owner_kind|owner_name|owner_is_controller|reason|replicaset|statefulset|revision|phase|node|kernel_version|os_image|container_runtime_version|kubelet_version|kubeproxy_version|pod_cidr|provider_id|system_uuid|internal_ip|key|value|effect|resource|unit|pod|host_ip|pod_ip|created_by_kind|created_by_name|uid|priority_class|host_network|ip|ip_family|image_image_id|image_spec|container_id|container|type|persistentvolumeclaim|label_.+_LABEL|id|name|device|major|minor|operation|cpu|failure_type|scope'
    action: 'labelkeep'

(c) Modifying the Prometheus settings (Red Hat OpenShift)

- Prerequisites

  • You can access the cluster as a user with the cluster-admin role.

  • The OpenShift CLI (oc) has been installed.

- Procedure

  1. Check whether a ConfigMap object is created.

    $ oc -n openshift-monitoring get configmap cluster-monitoring-config
  2. If the ConfigMap object has not been created, create a new file.

    $ vi cluster-monitoring-config.yaml
  3. If the ConfigMap object has already been created, edit the cluster-monitoring-config object in the openshift-monitoring project.

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  4. Add the settings, written in camel case, under data/config.yaml/prometheusK8s.

    (Example)

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          externalLabels:
            jp1_pc_prome_hostname: promHost
            jp1_pc_prome_clustername: myCluster
          remoteWrite:
          - url: http://host-name-of-JP1/IM - Manager (Intelligent Integrated Management Base):20703/im/api/v1/trendData/write
            writeRelabelConfigs:
            - sourceLabels: ['__name__']
              regex: 'kube_job_status_failed|kube_job_owner|kube_pod_status_phase|kube_daemonset_status_desired_number_scheduled|kube_daemonset_status_current_number_scheduled|kube_deployment_spec_replicas|kube_deployment_status_replicas_available|kube_replicaset_spec_replicas|kube_replicaset_status_ready_replicas|kube_replicaset_owner|kube_statefulset_replicas|kube_statefulset_status_replicas_ready|kube_node_status_condition|container_cpu_usage_seconds_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes|node_boot_time_seconds|node_context_switches_total|node_cpu_seconds_total|node_disk_io_now|node_disk_io_time_seconds_total|node_disk_read_bytes_total|node_disk_reads_completed_total|node_disk_writes_completed_total|node_disk_written_bytes_total|node_filesystem_avail_bytes|node_filesystem_files|node_filesystem_files_free|node_filesystem_free_bytes|node_filesystem_size_bytes|node_intr_total|node_load1|node_load15|node_load5|node_memory_Active_file_bytes|node_memory_Buffers_bytes|node_memory_Cached_bytes|node_memory_Inactive_file_bytes|node_memory_MemAvailable_bytes|node_memory_MemFree_bytes|node_memory_MemTotal_bytes|node_memory_SReclaimable_bytes|node_memory_SwapFree_bytes|node_memory_SwapTotal_bytes|node_netstat_Icmp6_InMsgs|node_netstat_Icmp_InMsgs|node_netstat_Icmp6_OutMsgs|node_netstat_Icmp_OutMsgs|node_netstat_Tcp_InSegs|node_netstat_Tcp_OutSegs|node_netstat_Udp_InDatagrams|node_netstat_Udp_OutDatagrams|node_network_flags|node_network_iface_link|node_network_mtu_bytes|node_network_receive_errs_total|node_network_receive_packets_total|node_network_transmit_colls_total|node_network_transmit_errs_total|node_network_transmit_packets_total|node_time_seconds|node_uname_info|node_vmstat_pswpin|node_vmstat_pswpout'
              action: keep
            - sourceLabels: ['__name__','namespace']
              regex: '(kube_pod_|kube_job_|container_).*;(.*)'
              targetLabel: jp1_pc_nodelabel
              replacement: $2
            - sourceLabels: ['__name__','node']
              regex: 'kube_node_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','daemonset']
              regex: 'kube_daemonset_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','deployment']
              regex: 'kube_deployment_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','replicaset']
              regex: 'kube_replicaset_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','statefulset']
              regex: 'kube_statefulset_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','owner_name']
              regex: 'kube_job_owner;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__','instance']
              regex: 'node_.*;(.*)'
              targetLabel: jp1_pc_nodelabel
            - sourceLabels: ['__name__']
              regex: 'kube_.*'
              targetLabel: jp1_pc_trendname
              replacement: kube_state_metrics
            - sourceLabels: ['__name__']
              regex: 'node_.*'
              targetLabel: jp1_pc_trendname
              replacement: node_exporter
            - sourceLabels: ['__name__']
              regex: 'container_.*'
              targetLabel: jp1_pc_trendname
              replacement: kubelet
            - sourceLabels: ['__name__']
              regex: 'kube_.*'
              targetLabel: jp1_pc_exporter
              replacement: JPC Kube state metrics
            - sourceLabels: ['__name__']
              regex: 'node_.*'
              targetLabel: jp1_pc_exporter
              replacement: JPC Node exporter
            - sourceLabels: ['__name__']
              regex: 'container_.*'
              targetLabel: jp1_pc_exporter
              replacement: JPC Kubelet
            - sourceLabels: ['__name__']
              regex: 'kube_.*'
              targetLabel: job
              replacement: jpc_kube_state
            - sourceLabels: ['__name__']
              regex: 'node_.*'
              targetLabel: job
              replacement: jpc_node
            - sourceLabels: ['__name__']
              regex: 'container_.*'
              targetLabel: job
              replacement: jpc_kubelet
  5. Save the file and apply the changes to the ConfigMap object.

    $ oc apply -f cluster-monitoring-config.yaml

(d) Modifying the Prometheus settings (Amazon Elastic Kubernetes Service (EKS))

- Procedure

  1. Create a YAML file with any name (example: my_prometheus_values.yml) and add the settings to the server section.

    • Settings in external_labels

      Add them to the global.external_labels section.

    • Settings in remote_write

      Add them to the remoteWrite section.

    (Example)

    server:
      global:
        external_labels:
          jp1_pc_prome_hostname: promHost
          jp1_pc_prome_clustername: myCluster
      remoteWrite:
        - url: http://host-name-of-JP1/IM - Manager (Intelligent Integrated Management Base):20703/im/api/v1/trendData/write
          write_relabel_configs:
            - source_labels: ['__name__']
              regex: 'kube_job_status_failed|kube_job_owner|kube_pod_status_phase|kube_daemonset_status_desired_number_scheduled|kube_daemonset_status_current_number_scheduled|kube_deployment_spec_replicas|kube_deployment_status_replicas_available|kube_replicaset_spec_replicas|kube_replicaset_status_ready_replicas|kube_replicaset_owner|kube_statefulset_replicas|kube_statefulset_status_replicas_ready|kube_node_status_condition|container_cpu_usage_seconds_total|container_fs_reads_bytes_total|container_fs_writes_bytes_total|container_memory_working_set_bytes|container_spec_memory_limit_bytes|node_boot_time_seconds|node_context_switches_total|node_cpu_seconds_total|node_disk_io_now|node_disk_io_time_seconds_total|node_disk_read_bytes_total|node_disk_reads_completed_total|node_disk_writes_completed_total|node_disk_written_bytes_total|node_filesystem_avail_bytes|node_filesystem_files|node_filesystem_files_free|node_filesystem_free_bytes|node_filesystem_size_bytes|node_intr_total|node_load1|node_load15|node_load5|node_memory_Active_file_bytes|node_memory_Buffers_bytes|node_memory_Cached_bytes|node_memory_Inactive_file_bytes|node_memory_MemAvailable_bytes|node_memory_MemFree_bytes|node_memory_MemTotal_bytes|node_memory_SReclaimable_bytes|node_memory_SwapFree_bytes|node_memory_SwapTotal_bytes|node_netstat_Icmp6_InMsgs|node_netstat_Icmp_InMsgs|node_netstat_Icmp6_OutMsgs|node_netstat_Icmp_OutMsgs|node_netstat_Tcp_InSegs|node_netstat_Tcp_OutSegs|node_netstat_Udp_InDatagrams|node_netstat_Udp_OutDatagrams|node_network_flags|node_network_iface_link|node_network_mtu_bytes|node_network_receive_errs_total|node_network_receive_packets_total|node_network_transmit_colls_total|node_network_transmit_errs_total|node_network_transmit_packets_total|node_time_seconds|node_uname_info|node_vmstat_pswpin|node_vmstat_pswpout'
              action: keep
            - source_labels: ['__name__','namespace']
              regex: '(kube_pod_|kube_job_|container_).*;(.*)'
              target_label: jp1_pc_nodelabel
              replacement: $2
            - source_labels: ['__name__','node']
              regex: 'kube_node_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','daemonset']
              regex: 'kube_daemonset_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','deployment']
              regex: 'kube_deployment_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','replicaset']
              regex: 'kube_replicaset_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','statefulset']
              regex: 'kube_statefulset_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','owner_name']
              regex: 'kube_job_owner;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__','instance']
              regex: 'node_.*;(.*)'
              target_label: jp1_pc_nodelabel
            - source_labels: ['__name__']
              regex: 'kube_.*'
              target_label: jp1_pc_trendname
              replacement: kube_state_metrics
            - source_labels: ['__name__']
              regex: 'node_.*'
              target_label: jp1_pc_trendname
              replacement: node_exporter
            - source_labels: ['__name__']
              regex: 'container_.*'
              target_label: jp1_pc_trendname
              replacement: kubelet
            - source_labels: ['__name__']
              regex: 'kube_.*'
              target_label: jp1_pc_exporter
              replacement: JPC Kube state metrics
            - source_labels: ['__name__']
              regex: 'node_.*'
              target_label: jp1_pc_exporter
              replacement: JPC Node exporter
            - source_labels: ['__name__']
              regex: 'container_.*'
              target_label: jp1_pc_exporter
              replacement: JPC Kubelet
            - source_labels: ['__name__']
              regex: 'kube_.*'
              target_label: job
              replacement: jpc_kube_state
            - source_labels: ['__name__']
              regex: 'node_.*'
              target_label: job
              replacement: jpc_node
            - source_labels: ['__name__']
              regex: 'container_.*'
              target_label: job
              replacement: jpc_kubelet
  2. Apply the changes.

    helm upgrade prometheus-chart-name prometheus-community/prometheus -n prometheus-namespace -f my_prometheus_values.yml

(e) Configuring scraping targets (optional)

- Change monitoring targets

If you want JP1/IM to monitor only some of the monitoring targets in a user environment, specify the monitoring targets in the write_relabel_configs section, as in the following examples.

(Example 1) Specifying a whitelist of specific resources

  - source_labels: ['__name__','pod']
    regex: '(kube_pod_|container_).*;(coredns-.*|prometheus)'
    action: 'keep'

(Example 2) Specifying a blacklist of all resources

  - source_labels: ['jp1_pc_nodelabel']
    regex: 'coredns-.*|prometheus'
    action: 'drop'

In addition, to monitor metrics that have already been collected in a different aggregation type, add a remote_write section that defines the aggregation type to be monitored.
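The following is a minimal sketch of such an additional entry; the URL placeholder follows the earlier examples, and the metric name in the regex is illustrative only:

    remote_write:
      - url: http://host-name-of-JP1/IM - Manager (Intelligent Integrated Management Base):20703/im/api/v1/trendData/write
        write_relabel_configs:
          # Keep only the metrics to be monitored in the other aggregation type
          # (the metric name below is an illustrative placeholder).
          - source_labels: ['__name__']
            regex: 'container_cpu_usage_seconds_total'
            action: 'keep'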

- Change monitoring metrics

If you want to change metrics displayed in the Trends tab of the integrated operation viewer, edit the metric definition files.

The files you need to edit are as follows:

  • Node exporter metric definition file (metrics_node_exporter.conf)

  • Container monitoring metric definition file (metrics_kubernetes.conf)

For details on these metric definition files, see the sections describing the applicable files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(f) Configuring the system node definition file (imdd_systemnode.conf) (required)

To create the system nodes described in (e) Tree format in 3.15.6(1)(i) Creating IM management nodes (__configurationGet method) in the JP1/Integrated Management 3 - Manager Overview and System Design Guide, edit the system node definition file (imdd_systemnode.conf) and configure the items listed in the table below. You can specify any value for the items that are not listed in the table.

Table 1‒17: Settings in the system node definition file (imdd_systemnode.conf)

displayName

  Specifies the name of the Kubernetes component.

type

  Must be specified in uppercase characters in the following format:

  JP1PC-KUBERNETES-Kubernetes-component-name

  Kubernetes-component-name is one of the names in the Component name column of the section describing the component names monitored by Kubernetes in 3.15.4(2)(b) Kubernetes in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

name

  Specified as follows:

  [{".*":"regexp"}]

The following table shows a setting example of the system node definition file for creating system nodes for the Kubernetes components that are defined by default as monitoring targets in the container monitoring metric definition file.

Table 1‒18: Setting example of the system node definition file (imdd_systemnode.conf)

displayName     type                            name
Clusters        JP1PC-KUBERNETES-CLUSTER        [{".*":"regexp"}]
Nodes           JP1PC-KUBERNETES-NODE           [{".*":"regexp"}]
Namespaces      JP1PC-KUBERNETES-NAMESPACE      [{".*":"regexp"}]
Deployments     JP1PC-KUBERNETES-DEPLOYMENT     [{".*":"regexp"}]
DaemonSets      JP1PC-KUBERNETES-DAEMONSET      [{".*":"regexp"}]
ReplicaSets     JP1PC-KUBERNETES-REPLICASET     [{".*":"regexp"}]
StatefulSets    JP1PC-KUBERNETES-STATEFULSET    [{".*":"regexp"}]
CronJobs        JP1PC-KUBERNETES-CRONJOB        [{".*":"regexp"}]
Pods            JP1PC-KUBERNETES-POD            [{".*":"regexp"}]

The following shows how the items in the above table, together with Kubernetes as their higher-level node, can be defined in the system node definition file.

{
  "meta":{
    "version":"2"
  },
  "allSystem":[
    {
      "id":"kubernetes",
      "displayName":"Kubernetes",
      "children":[
        {
          "id":"cluster",
          "displayName":"Clusters",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-CLUSTER",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"namespace",
          "displayName":"Namespaces",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-NAMESPACE",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"node",
          "displayName":"Nodes",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-NODE",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"deployment",
          "displayName":"Deployments",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-DEPLOYMENT",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"daemonset",
          "displayName":"DaemonSets",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-DAEMONSET",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"replicaset",
          "displayName":"ReplicaSets",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-REPLICASET",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"statefulset",
          "displayName":"StatefulSets",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-STATEFULSET",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"cronjob",
          "displayName":"CronJobs",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-CRONJOB",
              "name":[{".*":"regexp"}]
            }
          ]
        },
        {
          "id":"pod",
          "displayName":"Pods",
          "objectRoot":[
            {
              "type":"JP1PC-KUBERNETES-POD",
              "name":[{".*":"regexp"}]
            }
          ]
        }
      ]
    }
  ]
}

With the system node definition file specified, when the jddcreatetree command is run, an IM management node is displayed under the system node that has the corresponding Kubernetes component name.

For details on the system node definition file, see System node definition file (imdd_systemnode.conf) of JP1/IM - Manager in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(12) Editing the Script exporter definition file

(a) Specifying scripts as monitoring targets (required)

- Edit the Script exporter configuration file (jpc_script_exporter.yml)

Edit the Script exporter configuration file (jpc_script_exporter.yml) to define which scripts are to be monitored.

For details on the Script exporter configuration file, see Script exporter configuration file (jpc_script_exporter.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
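The following is a minimal sketch of a script definition, assuming the general script_exporter configuration format; the script name and path are hypothetical, and the exact schema of jpc_script_exporter.yml should be confirmed in the reference above:

    # jpc_script_exporter.yml (sketch; name and command are hypothetical)
    scripts:
      - name: check_backup
        command: C:\scripts\check_backup.bat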

(b) Modifying monitoring metrics (optional)

- Edit the Prometheus configuration file (jpc_prometheus_server.yml)

If you want to add metrics collected from the scripts, add them to the metric_relabel_configs section in the Prometheus configuration file (jpc_prometheus_server.yml).

For details on the Prometheus configuration file, see Prometheus configuration file (jpc_prometheus_server.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

scrape_configs:
  - job_name: jpc_script_exporter
    ...
    metric_relabel_configs:
      - source_labels: ['__name__']
        regex: 'script_success|script_duration_seconds|script_exit_code[Add metrics here]'

- Edit the Script exporter metric definition file (metrics_script_exporter.conf)

If you want to change metrics displayed in the Trends tab of the integrated operation viewer, edit the settings in the Script exporter metric definition file (metrics_script_exporter.conf).

For details on the Script exporter metric definition file, see Script exporter metric definition file (metrics_script_exporter.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(c) Changing ports (optional)

The listen port used by Script exporter is specified in the --web.listen-address option of the script_exporter command.

For details about how to change the options of the script_exporter command, see 1.21.2(1)(c) Change command-line options (for Windows) or 2.19.2(1)(c) Change command-line options (for Linux). For details about the --web.listen-address option, see If you want to change command line options in Unit definition file (jpc_program-name.service) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is 20722. If you change the port number, review the firewall settings and prohibit access from outside.

Notes:
  • When specifying a host name for this option, the same host name must be set for targets in the Script exporter discovery configuration file (jpc_file_sd_config_script.yml) on the same host. If you specify the targets with http_sd_config, also change url. See the sketch after these notes.

  • When specifying an IP address for this option, a host name that resolves to the specified IP address must be set for targets in the Script exporter discovery configuration file (jpc_file_sd_config_script.yml) on the same host. If you specify the targets with http_sd_config, also change url.
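For example, if the option were changed to a hypothetical port 20822 on host agenthost, the targets entry in the Script exporter discovery configuration file on the same host would be aligned as follows (a sketch only; leave any other settings in the shipped file as they are):

    # jpc_file_sd_config_script.yml (sketch; host name and port are examples)
    - targets:
        - agenthost:20822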

(d) Settings for executing the SAP system log extract command (optional)

If you use Script exporter to execute the SAP system log extract command, configure the Script exporter configuration file by using the sample file for SAP system monitoring (jpc_script_exporter_sap.yml).

For details about the sample file and how to configure the settings, see Sample file of Script exporter configuration file for SAP system monitoring (jpc_script_exporter_sap.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference, and 1.21.2(12) Editing the Script exporter definition file.

(13) Specifying a listening port number and listening address (optional)

If you neither change the default port numbers nor limit IP addresses to listen to, you can skip this step.

For details on what you should modify if you want to change port numbers or IP addresses to listen to, see 1.21.2 Settings of JP1/IM - Agent (for Windows) and 2.19.2 Settings of JP1/IM - Agent (for Linux).

(14) Firewall setup (for Windows) (mandatory)

You must set up the firewall to restrict access from outside as follows:

Table 1‒19: Firewall setup

Imagent port

  Access from outside is prohibited.

Imagentproxy port

  Access from outside is prohibited.

Imagentaction port

  Access from outside is prohibited.

Alertmanager port

  Access from outside is prohibited.

  However, if you want to monitor Alertmanager through synthetic monitoring performed by a Blackbox exporter on another host, allow the access. In this case, consider security measures such as limiting the source IP addresses.

Prometheus_server port

  Access from outside is prohibited.

  However, if you want to monitor the Prometheus server through synthetic monitoring performed by a Blackbox exporter on another host, allow the access. In this case, consider security measures such as limiting the source IP addresses.

Windows_exporter port, Node_exporter port, Process_exporter port, Ya_cloudwatch_exporter port, Promitor_scraper port, Promitor_resource_discovery port, Blackbox_exporter port, Script_exporter port, Fluentd port

  Access from outside is prohibited.

(15) Setup of integrated agent process alive monitoring (for Windows) (optional)

You can monitor integrated agent processes in the following ways:

(a) Synthetic monitoring by a Blackbox exporter on another host

The Prometheus server and Alertmanager services are monitored by the Blackbox exporter of an integrated agent running on another host. The following table shows the URLs to be monitored.

For details about how to add an HTTP monitor to Blackbox exporter, see 1.21.2(6)(c) Add, change, or Delete the monitoring target (for Windows) (mandatory). For details about how to set alert definitions, see 1.21.2(3)(b) To Add the alert definition (for Windows) (optional).

Table 1‒20: URL monitored by HTTP monitoring of Blackbox exporter

Service             URL to monitor

Prometheus server   http://Host-name-of-integrated-agent:Port-number-of-Prometheus-server/-/healthy

Alertmanager        http://Host-name-of-integrated-agent:Port-number-of-Alertmanager/-/healthy
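The following is a minimal sketch of registering these URLs as HTTP-monitoring targets of Blackbox exporter; the discovery file name jpc_file_sd_config_blackbox_http.yml and the host name agenthost are assumptions, and ports 20713 and 20714 are the default values used in the alert sample below:

    # jpc_file_sd_config_blackbox_http.yml (sketch; file name assumed)
    - targets:
        - http://agenthost:20713/-/healthy
        - http://agenthost:20714/-/healthy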

The following is a sample alert definition for monitoring by the HTTP monitor of Blackbox exporter.

groups:
  - name: service_healthcheck
    rules:
    - alert: jp1_pc_prometheus_healthcheck
      expr: probe_success{instance=~".*:20713/-/healthy"} == 0
      for: 3m
      labels:
        jp1_pc_product_name: "/HITACHI/JP1/JPCCS2"
        jp1_pc_component: "/HITACHI/JP1/JPCCS/CONFINFO"
        jp1_pc_severity: "Error"
        jp1_pc_metricname: "probe_success"
      annotations:
        jp1_pc_firing_description: "Communication to Prometheus server failed. "
        jp1_pc_resolved_description: "Communication to Prometheus server was successful. "
    - alert: jp1_pc_alertmanager_healthcheck
      expr: probe_success{instance=~".*:20714/-/healthy"} == 0
      for: 3m
      labels:
        jp1_pc_product_name: "/HITACHI/JP1/JPCCS2"
        jp1_pc_component: "/HITACHI/JP1/JPCCS/CONFINFO"
        jp1_pc_severity: "Error"
        jp1_pc_metricname: "probe_success"
      annotations:
        jp1_pc_firing_description: "Communication to Alertmanager failed. "
        jp1_pc_resolved_description: "Communication to Alertmanager was successful. "

(b) Process alive monitoring by Windows exporter

The imagentproxy service, the imagentaction service, the Fluentd service, and the Windows service programs are monitored by using the process activity information collected by Windows exporter. The processes to be monitored are described in the following table.

For details about how to set alert definitions, see 1.21.2(3)(b) To Add the alert definition (for Windows) (optional).

Table 1‒21: Processes monitored by the Windows exporter

jpc_imagent_service#
  Process to monitor: Agent-path\bin\jpc_imagent_service.exe
  Monitoring target 1: imagent
  Monitoring target 2: imagent

jpc_imagentproxy_service#
  Process to monitor: Agent-path\bin\jpc_imagentproxy_service.exe
  Monitoring target 1: imagentproxy
  Monitoring target 2: imagentproxy

jpc_imagentaction_service#
  Process to monitor: Agent-path\bin\jpc_imagentaction_service.exe
  Monitoring target 1: imagentaction
  Monitoring target 2: imagentaction

jpc_prometheus_server_service#
  Process to monitor: Agent-path\bin\jpc_prometheus_server_service.exe
  Monitoring target 1: prometheus
  Monitoring target 2: prometheus_server

jpc_alertmanager_service#
  Process to monitor: Agent-path\bin\jpc_alertmanager_service.exe
  Monitoring target 1: alertmanager
  Monitoring target 2: alertmanager

jpc_windows_exporter_service#
  Process to monitor: Agent-path\bin\jpc_windows_exporter_service.exe
  Monitoring target 1: Windows metric collector(Windows exporter)
  Monitoring target 2: windows_exporter

jpc_blackbox_exporter_service#
  Process to monitor: Agent-path\bin\jpc_blackbox_exporter_service.exe
  Monitoring target 1: RM Synthetic metric collector(Blackbox exporter)
  Monitoring target 2: blackbox_exporter

jpc_ya_cloudwatch_exporter_service#
  Process to monitor: Agent-path\bin\jpc_ya_cloudwatch_exporter_service.exe
  Monitoring target 1: RM AWS metric collector(Yet another cloudwatch exporter)
  Monitoring target 2: ya_cloudwatch_exporter

jpc_fluentd_service#
  Process to monitor: Agent-path\bin\jpc_fluentd_service.exe
  • When using Fluentd:
    Monitoring target 1: fluentd_win Log trapper (Fluentd)
    Monitoring target 2: fluentd
  • When using only the log metrics feature:
    Monitoring target 1: fluentd_prome_win Log trapper (Fluentd)
    Monitoring target 2: fluentd

jpc_script_exporter_service#
  Process to monitor: Agent-path\bin\jpc_script_exporter_service.exe
  Monitoring target 1: Script exporter
  Monitoring target 2: script_exporter

jpc_promitor_scraper_service#
  Process to monitor: Agent-path\bin\jpc_promitor_scraper_service.exe
  Monitoring target 1: RM Promitor
  Monitoring target 2: promitor_scraper

jpc_promitor_resource_discovery_service#
  Process to monitor: Agent-path\bin\jpc_promitor_resource_discovery_service.exe
  Monitoring target 1: RM Promitor
  Monitoring target 2: promitor_resource_discovery

#

Indicates a Windows service program.

The following is a sample alert definition for process monitoring by Windows exporter:

groups:
  - name: windows_exporter
    rules:
    - alert: jp1_pc_procmon_Monitoring-target-1
      expr: absent(windows_process_start_time{instance="imahost:20717", job="jpc_windows", jp1_pc_exporter="JPC Windows exporter", jp1_pc_nodelabel="jpc_Monitoring-target-2_service", process="jpc_Monitoring-target-2_service"}) == 1
      for: 3m
      labels:
        jp1_pc_product_name: "/HITACHI/JP1/JPCCS2"
        jp1_pc_component: "/HITACHI/JP1/JPCCS/CONFINFO"
        jp1_pc_severity: "Error"
        jp1_pc_metricname: "windows_process_start_time"
      annotations:
        jp1_pc_firing_description: "The number of processes was less than the threshold value (1)."
        jp1_pc_resolved_description: "The number of processes exceeded the threshold value (1)."
  • In the imahost part, specify the integrated agent host name. In the 20717 part, specify the port number of Windows exporter.

  • For Monitoring-target-1 and Monitoring-target-2, specify the monitored names in Table 1-21 Processes monitored by the Windows exporter.

  • If you specify more than one alert definition, repeat the lines starting with "- alert:" for each definition, as in the sketch below.
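For example, the following sketch fills in the placeholders for the jpc_prometheus_server_service row of Table 1-21 (the host name imahost and port 20717 remain example values; labels and annotations are the same as in the sample above):

    # Sketch: placeholders substituted for the Prometheus server row of Table 1-21
    - alert: jp1_pc_procmon_prometheus
      expr: absent(windows_process_start_time{instance="imahost:20717", job="jpc_windows", jp1_pc_exporter="JPC Windows exporter", jp1_pc_nodelabel="jpc_prometheus_server_service", process="jpc_prometheus_server_service"}) == 1
      for: 3m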

(c) Monitoring with the up metric of Prometheus server

The Windows exporter, Blackbox exporter, and Yet another cloudwatch exporter services are monitored through alert monitoring of the Prometheus server. For details about how to set alert definitions, see 1.21.2(3)(b) To Add the alert definition (for Windows) (optional).

The following is a sample alert definition that monitors the up metric:

groups:
  - name: exporter_healthcheck
    rules:
    - alert: jp1_pc_exporter_healthcheck
      expr: up{jp1_pc_remote_monitor_instance=""} == 0 or label_replace(sum by (jp1_pc_remote_monitor_instance,jp1_pc_exporter) (up{jp1_pc_remote_monitor_instance!=""}), "jp1_pc_nodelabel", "${1}", "jp1_pc_remote_monitor_instance", "^[^:]*:([^:]*)$") == 0
      for: 3m
      labels:
        jp1_pc_product_name: "/HITACHI/JP1/JPCCS2"
        jp1_pc_component: "/HITACHI/JP1/JPCCS/CONFINFO"
        jp1_pc_severity: "Error"
        jp1_pc_metricname: "up"
      annotations:
        jp1_pc_firing_description: " Communication to Exporter failed. "
        jp1_pc_resolved_description: " Communication to Exporter was successful. "

(16) Creation and import of IM management node tree data (for Windows) (mandatory)

Follow the steps below to create and import the IM management node tree data.

  1. If you add a new integrated agent host or change the host name of an integrated agent host, start the JP1/IM - Agent control base service on that host.

  2. When you add a new add-on program, or when a change to program settings leads to configuration changes, start the add-on program and the JP1/IM - Agent control base service on the same host.

  3. After all integrated agent hosts have started the corresponding services in steps 1 and 2, wait for one minute# after the services have started.

    #: If the value of scrape_interval in the Prometheus configuration file (jpc_prometheus_server.yml) has been changed, wait for the length of time set in that value. (See the sketch after this procedure for where scrape_interval is set.)

  4. Perform the remaining steps on the integrated manager host.

    For details about the procedure, see steps 2 to 5 in 1.19.3(1)(c) Creation and import of IM management node tree data (for Windows) (mandatory).
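The following is a minimal sketch of where scrape_interval is set in the Prometheus configuration file (jpc_prometheus_server.yml); the value 1m is an example only, not necessarily the value shipped with the product:

    # jpc_prometheus_server.yml (sketch; the interval value is an example)
    global:
      scrape_interval: 1m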

(17) Security-product exclusion setup (for Windows) (optional)

If you deploy antivirus software or other security products, set up the following directories as exclusion targets:

(18) Notes on updating the definition files (for Windows)

If you restart a JP1/IM - Agent service to apply the updated content of definition files, monitoring stops while the service is restarting.