JP1 Version 13 JP1/Integrated Management 3 - Manager Configuration Guide


2.19.2 Settings of JP1/IM - Agent

Organization of this subsection

(1) Common way for setting

(a) Edit the configuration files (for Linux)

See 1.21.2(1)(a) Edit the configuration files (for Windows).

(b) Changing unit definition file (for Linux only)

The following shows the storage location and file name of the unit definition files:

  • Storage destination: /usr/lib/systemd/system/

  • File name: jpc_service-name.service

To change the unit definition file, follow these steps (an example command sequence follows the procedure):

  1. Log in to the integrated agent host.

  2. Stop the JP1/IM - Agent service.

  3. Edit the unit definition file.

  4. Execute the following command to apply the unit definition:

    # systemctl daemon-reload
  5. Start the JP1/IM - Agent service.
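For example, when changing the unit definition file of the imagent service, the whole sequence might look as follows. This is a minimal sketch: the service name jpc_imagent and the use of systemctl to stop and start the service are illustrations only, so substitute the service and the start/stop method used in your environment.

    # systemctl stop jpc_imagent.service
    # vi /usr/lib/systemd/system/jpc_imagent.service
    # systemctl daemon-reload
    # systemctl start jpc_imagent.service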

(c) Change command-line options (for Linux)

Change the command-line options on the ExecStart line of the unit definition file.

For editing methods, see 2.19.2(1)(b) Changing unit definition file (for Linux only).

(2) Setup for JP1/IM agent control base

(a) Change Integrated manager to connect to (for Linux) (optional)

See 1.21.2(2)(a) Change Integrated manager to connect to (for Windows) (optional).

(b) Changing Ports (for Linux) (optional)

See 1.21.2(2)(b) Change the port (for Windows) (optional).

(c) Deploy a CA certificate (for Linux) (optional)

See 1.21.2(2)(c) Place CA certificate (for Windows) (optional).

(d) Modify settings related to Action Execution (on Linux) (optional)

See 1.21.2(2)(d) Modify settings related to Action Execution (for Windows) (optional).

(e) Setup the proxy authentication's authentication ID and Password (optional)

See 1.21.2(2)(e) Setup the proxy authentication's authentication ID and Password (for Windows) (optional).

(f) Change the user of Action Execution (for Linux) (required)

Change action.username in the imagent configuration file (jpc_imagent.json).

For setup procedure, see 2.19.2(1)(a) Edit the configuration files (for Linux).
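As an illustration only, the dotted name action.username is shown below as a nested JSON object; the actual structure and other keys of jpc_imagent.json are described in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference, and the user name jp1admin is just a placeholder.

    {
      "action": {
        "username": "jp1admin"
      }
    }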

(g) Configuring the event-forwarding relay function (for Linux) (optional)

See 1.21.2(2)(g) Configuring the event-forwarding relay function (for Windows) (optional).

(h) Setting the data delivery function to multiple manager hosts (for Linux) (optional)

■ New setting

  1. Stop the JP1/IM - Agent services.

    The service keys of the services to be stopped are shown below.

    • jpc_imagent

    • jpc_imagentaction

    • jpc_imagentproxy

    • jpc_alertmanager

    • jpc_prometheus_server

    • jpc_fluentd

  2. Create directories for the secondary JP1/IM agent control base.

    Create the following directories as the directories for the secondary JP1/IM agent control base:

    • /opt/jp1ima/tmp/download-imagent-group-identifier

    • /opt/jp1ima/tmp/upload-imagent-group-identifier

    • /opt/jp1ima/tmp/jbsfwd-imagent-group-identifier

    • /opt/jp1ima/logs/imagent-imagent-group-identifier

    • /opt/jp1ima/logs/imagentproxy-imagent-group-identifier

    • /opt/jp1ima/logs/imagentaction-imagent-group-identifier

    To create the above directories, use the following commands (a command sketch covering steps 2 and 3 appears after this procedure):

    mkdir -m 700 directory
    chown -R root:root directory
  3. Create unit definition files for the secondary JP1/IM agent control base.

    Copy the source unit definition files under /usr/lib/systemd/system and rename them to the destination file names.

    The owner and owning group of the created unit definition files must be root, and the file permissions must be 644.

    Source file name             Destination file name
    jpc_imagent.service          jpc_imagent-imagent-group-identifier.service
    jpc_imagentproxy.service     jpc_imagentproxy-imagent-group-identifier.service
    jpc_imagentaction.service    jpc_imagentaction-imagent-group-identifier.service

  4. Configure the settings for the secondary JP1/IM agent control base.

    See 3.15.7(5)(d) configuration file of JP1/IM agent control base in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

  5. Register a secret for the secondary JP1/IM agent control base.

    Obtain and register the initial secret of the JP1/IM - Manager to which the secondary JP1/IM agent control base connects.

    If you connect to JP1/IM - Manager through an HTTP proxy server that requires password-based HTTP authentication (Basic authentication), register the HTTP proxy password as a secret.

    For the key of initial secret to register and the key of HTTP proxy password, see 3.15.7(5)(e) Keys for initial secret, client secret, and HTTP proxy password in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

  6. Configure the settings for programs other than the JP1/IM agent control base.

    See 3.15.7(5)(f) Send settings for programs other than JP1/IM agent control base in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

  7. Configure the service startup settings.

    See 3.15.7(5)(c) Configuring secondary service startup in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

  8. Refresh systemd.

    Run the following command:

    systemctl daemon-reload
  9. Run systemctl list-unit-files, and for each secondary service whose status is masked, enable the service by running the following command:

    /opt/jp1ima/tools/jpc_service -on service-key

    For the service key, use the service key of JP1/IM - Agent in Linux described in 10.1 Service of JP1/IM - Agent in the JP1/Integrated Management 3 - Manager Administration Guide, with -imagent-group-identifier appended.

  10. Start the JP1/IM - Agent services.

    The service keys of the services to be started are shown below.

    • jpc_imagent

    • jpc_imagentaction

    • jpc_imagentproxy

    • jpc_imagent-imagent-group-identifier

    • jpc_imagentaction-imagent-group-identifier

    • jpc_imagentproxy-imagent-group-identifier

    • jpc_alertmanager

    • jpc_prometheus_server

    • jpc_fluentd

  11. Set auto start at OS startup.

    For details about enabling auto start, see 2.19.1(2)(a) Enable for Auto-start.
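The following is a command sketch of steps 2 and 3 above. imagent-group-identifier is the placeholder used throughout this subsection, so substitute your own group identifier, and repeat each command for every directory and unit definition file listed in those steps:

    # mkdir -m 700 /opt/jp1ima/tmp/download-imagent-group-identifier
    # chown -R root:root /opt/jp1ima/tmp/download-imagent-group-identifier
    # cp /usr/lib/systemd/system/jpc_imagent.service /usr/lib/systemd/system/jpc_imagent-imagent-group-identifier.service
    # chown root:root /usr/lib/systemd/system/jpc_imagent-imagent-group-identifier.service
    # chmod 644 /usr/lib/systemd/system/jpc_imagent-imagent-group-identifier.service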

(i) Deactivating the data delivery function to multiple manager hosts (for Linux) (optional)

The following describes the procedure to follow when the data delivery function to multiple manager hosts is no longer used.

Perform one of the following depending on your operating environment.

  • Remove only the data delivery function to multiple manager hosts without uninstalling JP1/IM - Agent

  • Uninstall JP1/IM - Agent itself.

■ To remove only the data delivery function to multiple manager hosts without uninstalling JP1/IM - Agent

  1. Stop all JP1/IM - Agent services.

    If the services are running, execute the following command to stop them.

    /opt/jp1ima/tools/jpc_service_stop -s all
  2. Disable the secondary services.

    Disable the secondary services of the JP1/IM agent control base by running the following command:

    /opt/jp1ima/tools/jpc_service -off service-key

    For details about how to disable the services, see steps 2 and 4 in 2.19.1(1)(a) Enable add-on programs.

    For the service key, use the service key of JP1/IM - Agent in Linux described in 10.1 Service of JP1/IM - Agent in the JP1/Integrated Management 3 - Manager Administration Guide, with -imagent-group-identifier appended.

  3. Delete the secret that you created to use the data delivery function to multiple manager hosts.

    Remove the secret for the secondary JP1/IM agent control base that you registered in step 5 of 2.19.2(2)(h) Setting the data delivery function to multiple manager hosts (for Linux) (optional).

    For the key of the secret to be deleted, see 3.15.7(5)(e) Keys for initial secret, client secret, and HTTP proxy password in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

    This step is not necessary for secrets that have already been deleted.

  4. Restore the settings that were changed to use the data delivery function to multiple manager hosts.

    Restore the settings for the secondary JP1/IM agent control base that were configured in step 4 of 2.19.2(2)(h) Setting the data delivery function to multiple manager hosts (for Linux) (optional).

  5. Delete the files copied to use the data delivery function to multiple manager hosts.

    Remove the unit definition files for the secondary JP1/IM agent control base that you copied in step 3 of 2.19.2(2)(h) Setting the data delivery function to multiple manager hosts (for Linux) (optional).

  6. Delete the directory created to use the data delivery function to multiple manager hosts.

    After saving the logs and any other required files, delete the directories for the secondary JP1/IM agent control base created in step 2 of 2.19.2(2)(h) Setting the data delivery function to multiple manager hosts (for Linux) (optional).

  7. Refresh systemd.

    Run the following command:

    systemctl daemon-reload
  8. Delete the integrated agent that is registered on the JP1/IM - Manager to which the disabled secondary service was connected.

    For details on deleting integrated agent data, see 2.2.1 List of Integrated Agents window in the JP1/Integrated Management 3 - Manager GUI Reference.

■ When uninstalling JP1/IM - Agent

Perform steps 3 through 6 of To remove only the data delivery function to multiple manager hosts without uninstalling JP1/IM - Agent above between steps 1 and 2 of 2.23.1(4) Steps for uninstalling JP1/IM - Agent.

Then perform step 8 of To remove only the data delivery function to multiple manager hosts without uninstalling JP1/IM - Agent above.

(3) Setup of Prometheus server

(a) Changing Ports (for Linux) (optional)

The listen port used by the Prometheus server is specified in the --web.listen-address option of the prometheus command.

For details about how to change prometheus command options, see 2.19.2(1)(c) Change command-line options (for Linux). For details of --web.listen-address option, see If you want to change command line options in Unit definition file (jpc_program-name.service) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is 20713. If you change the port number, review the firewall settings and prohibit access from outside. However, if you want to monitor the Prometheus server by external monitoring from a Blackbox exporter on another host, allow access to the port. In such cases, consider security measures, such as limiting the source IP addresses, as required.
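For example, the --web.listen-address option appears on the ExecStart line of the unit definition file (jpc_prometheus_server.service). The following is a minimal sketch, assuming the port is changed from the default 20713 to 20723; the binary path and the layout shown are illustrative, and the surrounding options are omitted:

    ExecStart = /bin/sh -c '"/opt/jp1ima/bin/prometheus" \
    (Omitted)
      --web.listen-address=":20723" \
    (Omitted)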

(b) Add the alert definition (for Linux) (optional)

See 1.21.2(3)(b) To Add the alert definition (for Windows) (optional).

(c) Add a Blackbox exporter scrape job (for Linux) (optional)

See 1.21.2(3)(c) Add Blackbox exporter scrape job (for Windows) (optional).

(d) Add a user-defined Exporter scrape job (for Linux) (optional)

See 1.21.2(3)(d) Add user-defined Exporter scrape job (for Windows) (optional).

(e) Change RemoteWrite destination (for Linux) (optional)

See 1.21.2(3)(e) Changing Remote Write destination (for Windows) (optional).

(f) Configuring service monitor settings (for Linux) (optional)

To use the service monitoring function in an environment that was upgraded from JP1/IM - Agent 13-00 to 13-01 or later, configure the following settings. These settings are not required if JP1/IM - Agent 13-01 or later was newly installed.

■ Editing Prometheus configuration file (jpc_prometheus_server.yml)

If "node_systemd_unit_state" is not included in the metrics kept by the metric_relabel_configs setting of the jpc_node scrape job, add it. Also, if the same metric_relabel_configs setting does not contain relabel settings for the "node_systemd_unit_state" metric, add them. Add the settings as shown in the following example.

(Omitted)
scrape_configs:
(Omitted)
  - job_name: 'jpc_node'
(Omitted)
    metric_relabel_configs:
      - source_labels: ['__name__']
        regex: 'node_network_receive_bytes_total|node_network_transmit_bytes_total|-- Omitted --|node_vmstat_pswpin|node_vmstat_pswpout|node_systemd_unit_state'
        action: 'keep'
      - source_labels: ['__name__']
        regex: 'node_systemd_unit_.*'
        target_label: 'jp1_pc_trendname'
        replacement: 'node_exporter_service'
      - source_labels: ['__name__']
        regex: 'node_systemd_unit_.*'
        target_label: 'jp1_pc_category'
        replacement: 'service'
      - source_labels: ['__name__','name']
        regex: 'node_systemd_unit_.*;(.*)'
        target_label: 'jp1_pc_nodelabel'
        replacement: ${1}
      - regex: jp1_pc_multiple_node
        action: labeldrop

■ Editing Node exporter discovery configuration file (jpc_file_sd_config_node.yml)

If jp1_pc_multiple_node is not set, add the setting as shown in the following example.

- targets:
  - hostname:20716
  labels:
    jp1_pc_exporter: JPC Node exporter
    jp1_pc_category: platform
    jp1_pc_trendname: node_exporter
    jp1_pc_multiple_node: "{__name__=~'node_systemd_unit_.*'}"

(g) Set when the IM management node label name (jp1_pc_nodelabel value) exceeds the upper limit (for Linux) (optional)

See 1.21.2(3)(g) Configure the settings when the label name (jp1_pc_nodelabel value) of the IM management node exceeds the upper limit (for Windows) (optional).

(h) Setting for executing the SAP system log extract command using Script exporter (for Linux) (optional)

See 1.21.2(3)(h) Setting for executing the SAP system log extract command using Script exporter (for Windows) (optional).

(i) Add a VMware exporter scrape job (for Linux) (optional)

See 1.21.2(3)(j) Add a VMware exporter scrape job (for Windows only) (optional).

(4) Setup of Alertmanager

(a) Changing Ports (For Linux) (optional)

The listen port used by Alertmanager is specified in the --web.listen-address option of the alertmanager command.

For details about how to change alertmanager command options, see 2.19.2(1)(c) Change command-line options (for Linux). For details of --web.listen-address option, see If you want to change command line options in Unit definition file (jpc_program-name.service) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is 20714. If you change the port number, review the firewall settings and prohibit access from outside. However, if you want to monitor Alertmanager by external monitoring from a Blackbox exporter on another host, allow access to the port. In such cases, consider security measures, such as limiting the source IP addresses, as required.

(b) Changing the alert notification destination (for Linux) (optional)

See 1.21.2(4)(b) Changing the alert notification destination (for Windows) (optional).

(c) Setup silence (on Linux) (optional)

See 1.21.2(4)(c) Setup silence (for Windows) (optional).

(5) Setup of Node exporter

(a) Changing Ports (optional)

The listen port used by Node exporter is specified in the --web.listen-address option of the node_exporter command.

For details about how to change the options of the node_exporter command, see 2.19.2(1)(c) Change command-line options (for Linux). For details of --web.listen-address option, see If you want to change command line options in Unit definition file (jpc_program-name.service) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is 20716. If you change the port number, review the firewall settings and prohibit access from outside.

(b) Change metric to collect (optional)

  1. In the metric_relabel_configs setting of the Prometheus configuration file (jpc_prometheus_server.yml), the metrics to be collected are defined, separated by "|". Delete metrics that you do not need to collect, and add metrics that you want to collect.

    For instructions on updating the configuration file, see 1.21.2(1)(a) Edit the configuration files (for Windows).

    <Sample Setup>

      - job_name: 'jpc_node'
        :
        metric_relabel_configs:
          - source_labels: ['__name__']
            regex: 'node_boot_time_seconds|node_context_switches_total|node_cpu_seconds_total|node_disk_io_now|node_disk_io_time_seconds_total|node_disk_read_bytes_total|node_disk_reads_completed_total|.......|node_time_seconds|node_uname_info|node_vmstat_pswpin|node_vmstat_pswpout [Add metric here]'
            action: 'keep'
  2. If necessary, define a trend view in the metric definition file.

    Define the trend view in the Node exporter metric definition file.

    For descriptions, see Node exporter metric definition file (metrics_node_exporter.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

  3. Configure the service monitor settings.

    - Editing the Node exporter unit definition file (jpc_node_exporter.service)

    When monitoring services, edit the Node exporter unit definition file (jpc_node_exporter.service) as shown in the example below.

    [Unit]
    Description = JP1/IM3-Agent Linux metric collector
    After=local-fs.target remote-fs.target rsyslog.service network.target
    [Service]
    WorkingDirectory = @@installdir2@@/jp1ima/bin
    ExecStart = /bin/sh -c '"@@installdir1@@/jp1ima/bin/node_exporter" \
      --collector.cpu.info \
    (Omitted)
      --no-collector.supervisord \
      --collector.systemd \
      --collector.systemd.unit-include="Regular expressions to match unit file names" \
      --no-collector.tcpstat \
      --no-collector.textfile \
      --no-collector.thermal_zone \
    (Omitted)

    If "--no-collector.systemd" is specified as an argument on the ExecStart line, change it to "--collector.systemd". If "--collector.systemd.unit-include" is not specified, add a line for it and set its value to a regular expression that matches the unit file names of the units you want to monitor.

    Depending on how regular expressions are specified, it may take some time to collect performance information. For details, see G.4 Tips on using regular expressions section in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

    - Node exporter unit definition file (jpc_node_exporter.service) definition example

    The following shows a sample configuration for monitoring jpc_imagent.service and jpc_imagentproxy.service units.

    [Unit]
    Description = JP1/IM3-Agent Linux metric collector
    After=local-fs.target remote-fs.target rsyslog.service network.target
    [Service]
    WorkingDirectory = @@installdir2@@/jp1ima/bin
    ExecStart = /bin/sh -c '"@@installdir1@@/jp1ima/bin/node_exporter" \
      --collector.cpu.info \
    (Omitted)
      --no-collector.supervisord \
      --collector.systemd \
      --collector.systemd.unit-include="^(jpc_imagent|jpc_imagentproxy)\.service$" \
      --no-collector.tcpstat \
      --no-collector.textfile \
      --no-collector.thermal_zone \
    (Omitted)

Among the units that match the above specification, only units for which automatic startup is enabled or whose status is running are monitored. For details, see Specifying monitored services in 3.15.1(1)(d) Node exporter (Linux performance data collection capability) in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.

(6) Setting up Process exporter

(a) Specifying monitored processes (required)

- Edit the Process exporter configuration file (jpc_process_exporter.yml)

Edit the Process exporter configuration file (jpc_process_exporter.yml) to define which processes are to be monitored.

By default, no processes are monitored. Uncomment the initial settings, and then specify the processes that you want to monitor in the Process exporter configuration file.

For details on the Process exporter configuration file, see Process exporter configuration file (jpc_process_exporter.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
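As an illustration only (the full syntax is described in the reference above), a minimal entry that monitors a single hypothetical daemon might look like the following; /opt/example/bin/mydaemon is a placeholder for a process on your host, and a complete sample matching the JP1/IM - Agent processes appears in 2.19.2(17)(b):

    process_names:
      - name: "{{.ExeBase}}"
        exe:
          - /opt/example/bin/mydaemon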

(b) Modifying monitoring metrics (optional)

- Edit the Prometheus configuration file (jpc_prometheus_server.yml)

If you want to change metrics to be collected, modify the metric_relabel_configs setting in the Prometheus configuration file (jpc_prometheus_server.yml).

For details on the Prometheus configuration file, see Prometheus configuration file (jpc_prometheus_server.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

- Edit the Process exporter metric definition file (metrics_process_exporter.conf)

If you want to change metrics displayed in the Trends tab of the integrated operation viewer, edit the settings in the Process exporter metric definition file (metrics_process_exporter.conf).

For details on the Process exporter metric definition file, see Process exporter metric definition file (metrics_process_exporter.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(c) Changing Ports (optional)

The listen port used by Process exporter is specified in the --web.listen-address option of the process_exporter command.

For details about how to change the options of process_exporter command, see 2.19.2(1)(c) Change command-line options (for Linux). For details about --web.listen-address option, see If you want to change command line options in Unit definition file (jpc_program-name.service) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is 20721. If you change the port number, review the firewall settings and prohibit access from outside.

Notes:
  • When specifying the host name for this option, the same host name must be set for targets in the Process exporter discovery configuration file (jpc_file_sd_config_process.yml) on the same host.

  • When specifying an IP address for this option, the host name that is resolved to the IP address specified for this option must be set for targets in the Process exporter discovery configuration file (jpc_file_sd_config_process.yml) on the same host.
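For example, if you change the listen address to agenthost01:20722, the targets entry of the Process exporter discovery configuration file (jpc_file_sd_config_process.yml) on the same host must specify the same host name and port. The following is a minimal sketch; agenthost01 and 20722 are placeholders, and the labels settings of the file are omitted here:

    - targets:
      - agenthost01:20722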

(d) Optional setting to remove child processes from monitoring

By default, Process exporter also monitors the child processes of a monitored process and collects operational data that includes those child processes.

If you want Process exporter to collect operational data without including child processes, modify the Process exporter unit definition file as follows:

File to be edited
  • For physical hosts

    /usr/lib/systemd/system/jpc_process_exporter.service

  • For logical hosts

    /usr/lib/systemd/system/jpc_process_exporter_logical-host-name.service

What to edit

Specify -children=false in the process_exporter startup options on the ExecStart line.

Setting example

ExecStart = /bin/sh -c '"/opt/jp1ima/bin/process_exporter" \

-children=false \

-debug=true \

--config.path="/opt/jp1ima/conf/jpc_process_exporter.yml" \

(7) Setup of Blackbox exporter

(a) Changing Ports (For Linux) (optional)

The listen port used by Blackbox exporter is specified in the --web.listen-address option of the blackbox_exporter command.

For details about how to change the options of blackbox_exporter command, see 2.19.2(1)(c) Change command-line options (for Linux). For details about --web.listen-address option, see If you want to change command line options in Unit definition file (jpc_program-name.service) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

The default port is 20715. If you change the port number, review the firewall settings and prohibit access from outside.

(b) Add, Modify, or Delete a Module (optional)

See 1.21.2(6)(b) Add, change, and delete modules (for Windows) (optional).

(c) Add, Modify, or Delete a monitoring target (for Linux) (required)

See 1.21.2(6)(c) Add, change, or Delete the monitoring target (for Windows) (required).

(d) Monitor HTTP through proxy (on Linux) (optional)

See 1.21.2(6)(d) Monitoring HTTP through proxy (for Windows) (optional).

(e) Setup the proxy authentication ID and Password (optional)

See 1.21.2(6)(e) Setup the proxy authentication ID and Password (for Windows) (optional).

(f) Setup authentication ID, Password, and Bearer tokens for accessing the monitored Web Server (optional)

See 1.21.2(6)(f) Setup authentication ID, Password, and Bearer tokens for accessing the monitored Web Server (for Windows) (optional).

(8) Setup in Yet another cloudwatch exporter

(a) Changing Ports (For Linux) (optional)

See 1.21.2(7)(a) Changing Ports (For Windows) (optional).

(b) Modify Setup to connect to CloudWatch (for Linux) (required)

See 1.21.2(7)(b) Modify Setup to connect to CloudWatch (for Windows) (required).

(c) Connect to CloudWatch through a proxy (for Linux) (optional)

See 1.21.2(7)(c) Connect to CloudWatch through a proxy (for Windows) (optional).

(d) Add AWS Services to be Monitored (optional)

See 1.21.2(7)(d) Add AWS Services to be Monitored (optional).

(e) Monitoring AWS Resources (required)

See 1.21.2(7)(e) Monitoring AWS Resources (required).

(f) Modify metric to Collect (optional)

See 1.21.2(7)(f) Modify metric to Collect (optional).

(9) Setup of Promitor

See 1.21.2(8) Set up of Promitor.

(10) Setup of Fluentd

(a) Changing Setup of Common Definition file for Log Monitor (optional)

See 1.21.2(9)(a) Changing Setup of Common Definition file for Log Monitor (for Windows) (optional).

(b) Monitoring the text-formatted log file (required)

See 1.21.2(9)(b) Monitoring the text-formatted log file (for Windows) (required).

(c) Modifying the monitoring settings of the text-formatted log file (optional)

See 1.21.2(9)(c) Modifying the monitoring settings of the text-formatted log file (for Windows) (optional).

(d) Deleting the monitoring settings of the text-formatted log file (optional)

See 1.21.2(9)(d) Deleting the monitoring settings of the text-formatted log file (for Windows) (optional).

(e) Setup of the log metrics definition (required)

See 1.21.2(9)(h) Setup of the log metrics definition (optional).

(f) Changing Ports (optional)

See 1.21.2(9)(i) Changing Ports (optional).

(g) Monitoring SAP system log information (optional)

See 1.21.2(9)(j) Monitoring SAP system logging (optional).

(h) Change monitoring settings for SAP system log information (optional)

See 1.21.2(9)(k) Modify SAP system logging monitoring configuration (optional).

(i) Delete the monitoring settings for log information in SAP systems (optional)

See 1.21.2(9)(l) Remove SAP system logging monitoring configuration (optional).

(j) Monitoring CCMS alert information for SAP systems (optional)

See 1.21.2(9)(m) Monitoring CCMS alerting for SAP system (optional).

(k) Change monitoring settings for CCMS alert information in SAP systems (optional)

See 1.21.2(9)(n) Modify SAP system CCMS alert information monitoring settings (optional).

(l) Delete the monitoring settings for CCMS alert information in SAP systems (optional)

See 1.21.2(9)(o) Remove SAP system CCMS alert information monitoring settings (optional).

(11) Setting up scraping definitions

See 1.21.2(10) Setting up scraping definitions.

(12) Setting up container monitoring

See 1.21.2(11) Setting up container monitoring.

(13) Editing the Script exporter definition file

See 1.21.2(12) Editing the Script exporter definition file.

(14) Setting up VMware exporter

See 1.21.2(14) Setting up VMware exporter.

(15) Specifying a listening port number and listening address (optional)

See 1.21.2(16) Specifying a listening port number and listening address (optional).

(16) Firewall Setup (for Linux) (required)

See 1.21.2(17) Firewall's Setup (for Windows) (required).

(17) Setup of integrated agent process alive monitoring (for Linux) (optional)

Integrated agent processes are monitored in the following ways:

(a) External monitoring by a Blackbox exporter on another host

The Prometheus server and Alertmanager services are monitored from the Blackbox exporter of an integrated agent running on another host. The following table shows the URLs to be monitored.

For details about how to add an HTTP monitoring target of Blackbox exporter, see 1.21.2(6)(c) Add, change, or Delete the monitoring target (for Windows) (required). For details about how to set the alert definition, see 1.21.2(3)(b) To Add the alert definition (for Windows) (optional).

For an example of the alert definitions used for monitoring by the HTTP monitoring of Blackbox exporter, see 1.21.2(18) Setup of integrated agent process alive monitoring (for Windows) (optional).

Table 2‒12: URLs monitored by HTTP monitoring of Blackbox exporter

  Service             URL to monitor
  Prometheus server   http://host-name-of-integrated-agent:port-number-of-Prometheus-server/-/healthy
  Alertmanager        http://host-name-of-integrated-agent:port-number-of-Alertmanager/-/healthy
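Before registering these URLs as HTTP monitoring targets, you can check that they respond, for example with curl; agenthost01 and 20713 are placeholders for the integrated agent host name and the Prometheus server port:

    curl http://agenthost01:20713/-/healthy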

(b) Alive Monitoring Processes by Process exporter

The imagentproxy service, the imagentaction service, and the Fluentd service are monitored by checking the operating status of their processes with Process exporter. The processes to be monitored are listed in the following table.

For details about how to set up, see Process exporter configuration file (jpc_process_exporter.yml) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

For details about how to set the alert definition, see 1.21.2(3)(b) To Add the alert definition (for Windows) (optional).

Table 2‒13: Processes monitored by Process exporter

  Service: imagent
  Process to monitor: Agent-path/bin/imagent
  Remarks: Set this when you want to detect a case in which imagent, whose polling monitoring is described in 3.15.8(2) Polling monitoring for JP1/IM agent control base in the JP1/Integrated Management 3 - Manager Overview and System Design Guide, starts up again quickly after stopping abnormally.

  Service: imagentproxy
  Process to monitor: Agent-path/bin/imagentproxy
  Remarks: Not applicable.

  Service: imagentaction
  Process to monitor: Agent-path/bin/imagentaction
  Remarks: Not applicable.

  Service: Fluentd
  Process to monitor: Agent-path/lib/ruby/bin/ruby
  Remarks: The string "jpc_fluentd_common.conf" on the command line distinguishes it from ruby processes other than Fluentd.

  Service: Rotatelogs (only for Fluentd)
  Process to monitor: Agent-path/bin/rotatelogs
  Remarks: The string "Agent-path/logs/fluentd" on the command line distinguishes it from rotatelogs processes other than Fluentd.

The following is a sample Process exporter configuration file for monitoring these processes.

process_names:
  - name: "{{.ExeBase}};{{.Username}};{{.Matches.cmdline}}"
    exe:
     - /opt/jp1ima/bin/imagent
     - /opt/jp1ima/bin/imagentproxy
     - /opt/jp1ima/bin/imagentaction
 
  - name: "{{.ExeBase}};{{.Username}};{{.Matches.cmdline}}"
    exe:
     - /opt/jp1ima/bin/rotatelogs
    cmdline:
     - (?P<cmdline>.*/opt/jp1ima/logs/fluentd\.*)
 
  - name: "{{.ExeBase}};{{.Username}};{{.Matches.cmdline}}"
    exe:
     - /opt/jp1ima/lib/ruby/bin/ruby
    cmdline:
     - (?P<cmdline>.*jpc_fluentd_common\.conf.*)
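After restarting Process exporter with this configuration, you can informally verify that the monitored processes are being reported by checking the exposed metrics, for example as follows (20721 is the default Process exporter port):

    curl http://localhost:20721/metrics | grep namedprocess_namegroup_num_procs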

The following is a sample alert definition for the processes monitored by Process exporter:

groups:
  - name: process_exporter
    rules:
    - alert: jp1_pc_procmon_imagent
      expr: 1 >  sum by (program, instance, job, jp1_pc_nodelabel, jp1_pc_exporter) (namedprocess_namegroup_num_procs{program="imagent"})
      for: 3m
      labels:
        jp1_pc_product_name: "/HITACHI/JP1/JPCCS2"
        jp1_pc_component: "/HITACHI/JP1/JPCCS/CONFINFO"
        jp1_pc_severity: "Error"
        jp1_pc_eventid: "1303"
        jp1_pc_metricname: "namedprocess_namegroup_num_procs"
      annotations:
        jp1_pc_firing_description: "The number of processes was less than the threshold Value (1). value={{$value}}"
        jp1_pc_resolved_description: "The number of processes exceeded the threshold Value (1)."
 
    - alert: jp1_pc_procmon_imagentproxy
      expr: 1 >  sum by (program, instance, job, jp1_pc_nodelabel, jp1_pc_exporter) (namedprocess_namegroup_num_procs{program="imagentproxy"})
      for: 3m
      labels:
        jp1_pc_product_name: "/HITACHI/JP1/JPCCS2"
        jp1_pc_component: "/HITACHI/JP1/JPCCS/CONFINFO"
        jp1_pc_severity: "Error"
        jp1_pc_eventid: "1303"
        jp1_pc_metricname: "namedprocess_namegroup_num_procs"
      annotations:
        jp1_pc_firing_description: "The number of processes was less than the threshold Value (1). value={{$value}}"
        jp1_pc_resolved_description: "The number of processes exceeded the threshold Value (1)."
 
    - alert: jp1_pc_procmon_imagentaction
      expr: 1 >  sum by (program, instance, job, jp1_pc_nodelabel, jp1_pc_exporter) (namedprocess_namegroup_num_procs{program="imagentaction"})
      for: 3m
      labels:
        jp1_pc_product_name: "/HITACHI/JP1/JPCCS2"
        jp1_pc_component: "/HITACHI/JP1/JPCCS/CONFINFO"
        jp1_pc_severity: "Error"
        jp1_pc_eventid: "1303"
        jp1_pc_metricname: "namedprocess_namegroup_num_procs"
      annotations:
        jp1_pc_firing_description: "The number of processes was less than the threshold Value (1). value={{$value}}"
        jp1_pc_resolved_description: "The number of processes exceeded the threshold Value (1)."
 
    - alert: jp1_pc_procmon_fluentd_rotatelogs Log trapper(Fluentd) #1
      expr: 1 >  sum by (program, instance, job, jp1_pc_nodelabel, jp1_pc_exporter) (namedprocess_namegroup_num_procs{program="rotatelogs"})
      for: 3m
      labels:
        jp1_pc_product_name: "/HITACHI/JP1/JPCCS2"
        jp1_pc_component: "/HITACHI/JP1/JPCCS/CONFINFO"
        jp1_pc_severity: "Error"
        jp1_pc_eventid: "1303"
        jp1_pc_metricname: "namedprocess_namegroup_num_procs"
      annotations:
        jp1_pc_firing_description: "The number of processes was less than the threshold Value (1). value={{$value}}"
        jp1_pc_resolved_description: "The number of processes exceeded the threshold Value (1). "
 
    - alert: jp1_pc_procmon_fluentd_ruby Log trapper(Fluentd) #2
      expr: 2 >  sum by (program, instance, job, jp1_pc_nodelabel, jp1_pc_exporter) (namedprocess_namegroup_num_procs{program="ruby"}) #3
      for: 3m
      labels:
        jp1_pc_product_name: "/HITACHI/JP1/JPCCS2"
        jp1_pc_component: "/HITACHI/JP1/JPCCS/CONFINFO"
        jp1_pc_severity: "Error"
        jp1_pc_eventid: "1303"
        jp1_pc_metricname: "namedprocess_namegroup_num_procs"
      annotations:
        jp1_pc_firing_description: "The number of processes was less than the threshold Value (2). value={{$value}}"
        jp1_pc_resolved_description: "The number of processes exceeded the threshold Value (2). "
#1

If only log metrics feature is used, specify "jp1_pc_procmon_fluentd_prome_rotatelogs Log trapper(Fluentd)".

#2

If only log metrics feature is used, specify "jp1_pc_procmon_fluentd_prome_ruby Log trapper(Fluentd)".

#3

The number of Ruby processes started is the number of workers + 1. For the threshold, specify the number of workers + 1. For details on the number of workers, see Log monitoring common definition file (jpc_fluentd_common.conf) in Chapter 2. Definition Files in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.

(c) Monitoring by Prometheus server up metric

The Node exporter, Process exporter, Blackbox exporter, and Yet another cloudwatch exporter services are monitored by alert monitoring of the up metric on the Prometheus server. For details about how to set the alert definition, see 1.21.2(3)(b) To Add the alert definition (for Windows) (optional).

For an example of an alert definition that monitors up metric, see 1.21.2(18) Setup of integrated agent process alive monitoring (for Windows) (optional).
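As an illustration of the idea only, not the definitions in the section referenced above, an alert on the up metric might look like the following sketch. The alert name is hypothetical, the selector jp1_pc_exporter="JPC Node exporter" follows the label shown in the Node exporter discovery configuration file example earlier in this subsection, and the labels required for JP1 event conversion are omitted; use the referenced section for the actual definitions.

    - alert: jp1_pc_procmon_node_exporter
      expr: up{jp1_pc_exporter="JPC Node exporter"} == 0
      for: 3m
      annotations:
        jp1_pc_firing_description: "Node exporter could not be scraped. value={{$value}}"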

(18) Creating and importing IM management node tree data (for Linux) (required)

See 1.21.2(19) Creation and import of IM management node tree data (for Windows) (required).

(19) Security product exclusion Setup (for Linux) (optional)

See 1.21.2(20) Security-product exclusion Setup (for Windows) (optional).

(20) Notes on updating definition file (for Linux)

See 1.21.2(21) Notes on updating the definition file (for Windows).