Hitachi

JP1 Version 12 JP1/Performance Management - Remote Monitor for Virtual Machine Description, User's Guide and Reference


3.3.4 Setup procedure

This subsection explains the setup for operating Performance Management in a cluster system.

The setup procedure depends on the virtual environment to be monitored. The icon [Figure], [Figure], [Figure], [Figure], [Figure], [Figure], or [Figure] indicates a setup item required for the indicated virtual environment.

There are setup procedures for the executing node and for the standby node. Set up the executing node first and then set up the standby node.

The icon [Figure] indicates an item that must be set up for the executing node, and the icon [Figure] indicates an item that must be set up for the standby node. The icon [Figure] indicates an item that may be required depending on the environment that is used, or an optional setup item that is available for changing the default setting.

Important

Do not set JPC_HOSTNAME as an environment variable because it is used in Performance Management. If you do so, Performance Management will not operate correctly.
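The restriction above can be verified before operation with a small shell check. This is a sketch for illustration, not a product utility; the function name is hypothetical:

```shell
#!/bin/sh
# Hypothetical pre-check: JPC_HOSTNAME is reserved by Performance Management,
# so it must not be defined as an environment variable.
check_jpc_hostname() {
  if [ -n "${JPC_HOSTNAME:-}" ]; then
    printf 'NG: unset JPC_HOSTNAME before operating Performance Management\n'
    return 1
  fi
  printf 'OK: JPC_HOSTNAME is not set\n'
}

check_jpc_hostname || true
```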

Organization of this subsection

(1) Registering PFM - RM for Virtual Machine [Figure] [Figure] [Figure] [Figure] [Figure] [Figure] [Figure]

To use PFM - Manager and PFM - Web Console to centrally manage PFM - RM for Virtual Machine, you need to register PFM - RM for Virtual Machine in PFM - Manager and PFM - Web Console.

The conditions and procedure for registering PFM - RM for Virtual Machine are the same as those used for a non-cluster system. For details about the procedure, see 2.1.4(1) Registering PFM - RM for Virtual Machine.

(2) Setting PFM - RM for Virtual Machine [Figure] [Figure] [Figure] [Figure] [Figure] [Figure] [Figure]

You need to specify the same settings for PFM - RM for Virtual Machine on both the executing node and the standby node. For details about the procedure, see 2.1.4(2) Setting PFM - RM for Virtual Machine.

(3) Bringing the shared disk online [Figure] [Figure] [Figure] [Figure] [Figure]

Confirm that the shared disk is online. If the shared disk is not online, bring it online using a cluster software operation or volume manager operation.

(4) Setting up the logical host of PFM - RM for Virtual Machine [Figure] [Figure] [Figure] [Figure] [Figure]

Create a logical host environment by executing the jpcconf ha setup command. Executing this command copies the necessary data to the shared disk, sets up a definition for the logical host, and creates a logical host environment.

Note:

Before executing the command, stop all Performance Management programs and services in the entire Performance Management system. For details about how to stop services, see the chapter that explains startup and termination of Performance Management in the JP1/Performance Management User's Guide.

To set up a logical host:

  1. Create a logical host environment of PFM - RM for Virtual Machine by executing the jpcconf ha setup command.

    Execute the command as follows.

    jpcconf ha setup -key RMVM -lhost jp1-halvm -d S:\jp1

    Specify a logical host name in the -lhost option. Here, jp1-halvm is used as the logical host name. If DNS is used, specify a logical host name with the domain name omitted.

    Specify the folder name of the shared disk in the environment folder name of the -d option. For example, if -d S:\jp1 is specified, S:\jp1\jp1pc is created and a logical host environment file is created.

    Important

    Make sure that the path name specified as the environment folder name does not include the following characters:

    ( )

    If the path name includes either of these characters, the logical host environment might still be created successfully, but PFM - RM for Virtual Machine cannot start.

  2. Confirm the logical host settings by executing the jpcconf ha list command.

    Execute the command as follows.

    jpcconf ha list -key all

    Confirm that the created logical host environment is correct.
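As a supplement, the folder-name restriction described in step 1 can be pre-checked with a small shell sketch before executing the jpcconf ha setup command. The helper function below is an assumption for illustration, not a product command:

```shell
#!/bin/sh
# Hypothetical pre-check for the environment folder name passed to the
# -d option: reject path names that contain "(" or ")".
check_env_dir() {
  case "$1" in
    *"("*|*")"*)
      printf 'NG: path contains parentheses: %s\n' "$1"
      return 1
      ;;
    *)
      printf 'OK: %s\n' "$1"
      ;;
  esac
}

check_env_dir 'S:\jp1'
check_env_dir 'S:\jp1 (backup)' || true
```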

(5) Setting up PFM - Manager at the connection destination [Figure] [Figure] [Figure] [Figure] [Figure]

Set up the PFM - Manager that manages PFM - RM for Virtual Machine by executing the jpcconf mgrhost define command.

  1. Set up PFM - Manager at the connection destination by executing the jpcconf mgrhost define command.

    Execute the command as follows.

    jpcconf mgrhost define -host jp1-hal -lhost jp1-halvm

    Specify the host name of PFM - Manager at the connection destination in the -host option. If the PFM - Manager instance at the connection destination is running on a logical host, specify the logical host name of PFM - Manager at the connection destination in the -host option. Here, jp1-hal is used as the logical host name of PFM - Manager.

    Specify the logical host name of PFM - RM for Virtual Machine in the -lhost option. Here, jp1-halvm is used as the logical host name of PFM - RM for Virtual Machine.

(6) Setting up an instance environment [Figure] [Figure] [Figure] [Figure] [Figure]

Set up an instance environment for PFM - RM for Virtual Machine by executing the jpcconf inst setup command.

The setup procedure is the same as that used for a non-cluster system. Note, however, that in the case of a cluster system, you must specify a logical host name in the -lhost option when you execute the jpcconf inst setup command.

The following shows an example of the jpcconf inst setup command that can be executed in a cluster system.

jpcconf inst setup -key RMVM -lhost jp1-halvm -inst inst1

Although an example of interactive command execution is shown here, the jpcconf inst setup command can also be executed non-interactively. For details about the jpcconf inst setup command, see the chapter that describes commands in the manual JP1/Performance Management Reference.

For other setting details and procedures, see 2.1.4(3) Setting up an instance environment.

(7) Setting up monitoring targets [Figure] [Figure] [Figure] [Figure] [Figure]

Set up the monitoring-target hosts of PFM - RM for Virtual Machine by executing the jpcconf target setup command.

The setup procedure is the same as that used for a non-cluster system. Note, however, that in the case of a cluster system, you must specify a logical host name with the -lhost option when you execute the jpcconf target setup command.

The following shows an example of the jpcconf target setup command that can be executed in a cluster system.

jpcconf target setup -key RMVM -lhost jp1-halvm -inst inst1 -target targethost1 

Although an example of interactive command execution is shown here, the jpcconf target setup command can also be executed non-interactively. For details about the jpcconf target setup command, see the chapter that describes commands in the manual JP1/Performance Management Reference.

For details about other settings and procedures, see 2.1.4(4) Setting up monitoring targets.

(8) Setting up the logical hosts of other Performance Management programs [Figure] [Figure] [Figure] [Figure] [Figure] [Figure]

Besides PFM - RM for Virtual Machine, if there are other programs such as PFM - Manager and PFM - Agent or PFM - RM that must be set up on the same logical host, set them up at this stage.

For details about how to perform setup, see the chapter that explains configuration and operations in a cluster system in the JP1/Performance Management User's Guide.

(9) Settings for each monitoring target [Figure] [Figure] [Figure]

This section describes the settings required for each monitored virtual environment.

(a) For VMware[Figure]

Confirm that the following condition is satisfied:

  • Encrypted communication necessary for SSL/TLS connections with monitored hosts is configured on both the executing node and the standby node.

For details about the SSL/TLS connection settings, see 2.5.1 For VMware.

Use user-defined records to monitor performance information that is not retrieved by PFM - RM for Virtual Machine. For details about user-defined records, see 2.5.1(6) User defined records.

(b) For Hyper-V[Figure]

Confirm that the following condition is satisfied:

  • The same user account that can connect to the monitored host via WMI is created on both the executing node and the standby node.

For details about the WMI connection settings, see 2.5.2 For Hyper-V.

(c) For KVM[Figure]

Confirm that the following conditions are satisfied:

  • A private key is stored at the same path on both the executing node and the standby node.

  • The private key can be used to establish an SSH connection with a monitored host.

  • When SSH_Type is set to putty, PuTTY is installed in the same path on both the executing node and the standby node.

    If you use OpenSSH, which comes with Windows Server 2019, as the SSH client, you do not have to install PuTTY.

    Important

    Use either of the following methods to register the private key and public key:

    • Create a private key on the executing node, copy it to the standby node, and pair the private key with the public key that is distributed from the executing node to the monitored host.

    • Create public keys separately on the executing node and the standby node, and register both public keys on the monitored host so that each node can connect using its own private key.

For details about the SSH connection settings, see 2.5.7 SSH connection settings.
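As a supplement, the second key-registration method in the Important note above can be sketched in shell: both nodes' public keys are appended to the authorized-keys file on the monitored host. The key strings and file path below are placeholders, not real values:

```shell
#!/bin/sh
# Sketch: register the public keys of both the executing node and the
# standby node on the monitored host, so that whichever node is active
# can connect with its own private key. All values are placeholders.
AUTH_KEYS=./authorized_keys_demo    # stands in for ~/.ssh/authorized_keys

: > "$AUTH_KEYS"

EXEC_NODE_PUBKEY='ssh-rsa AAAAB3...exec pfm@executing-node'
STBY_NODE_PUBKEY='ssh-rsa AAAAB3...stby pfm@standby-node'

for key in "$EXEC_NODE_PUBKEY" "$STBY_NODE_PUBKEY"; do
  # Append only if the key is not registered yet (keeps the step idempotent).
  grep -qxF "$key" "$AUTH_KEYS" || printf '%s\n' "$key" >> "$AUTH_KEYS"
done

wc -l < "$AUTH_KEYS"
```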

(d) For Docker environment[Figure]

Confirm that the following condition is satisfied:

  • The CA and client certificates that are necessary for SSL/TLS connection with a monitored host are installed on both the executing node and the standby node.

For details about the SSL/TLS connection settings, see 2.5.4 For Docker environment.

(e) For Podman environment [Figure]

The same conditions as for KVM apply. See (c) For KVM.

(f) For logical partitioning feature[Figure]

Confirm that the following condition is satisfied:

  • In the environment with the logical partitioning feature to be monitored, the IP address of the machine on which the monitoring agent is installed is set.

For details, see 2.5.6 For logical partitioning feature.

(10) Setting up the network [Figure] [Figure] [Figure] [Figure] [Figure] [Figure]

This setup is necessary only when you want to change the settings according to the network configuration in which Performance Management is used.

During network setup, you can specify the following two items:

  • IP addresses
    Specify this item to use Performance Management in a network connected to multiple LANs.

  • Port numbers
    Specify this item to use Performance Management in a network environment that has a firewall.

For details, see the chapter that explains installation and setup in the JP1/Performance Management Planning and Configuration Guide.

(11) Changing the log file size [Figure] [Figure] [Figure] [Figure] [Figure] [Figure]

The operation status of Performance Management is output to a log file specific to Performance Management. This log file is called the common message log. This setting is necessary only when you want to change the size of this file.

For details, see the chapter that explains installation and setup in the JP1/Performance Management Planning and Configuration Guide.

(12) Changing the performance data storage destination [Figure] [Figure] [Figure] [Figure] [Figure] [Figure]

This setting is necessary only when you want to change the folder at the save destination, the backup destination, the export destination, or the import destination for the database that stores the performance data managed by PFM - RM for Virtual Machine.

For details about this setting, see 2.1.4(8) Changing the performance data storage destination.

(13) Setting up the action log output [Figure] [Figure] [Figure] [Figure] [Figure] [Figure]

This setting is necessary to output action logs when a PFM service starts up or stops, or when the connection status to PFM - Manager changes. An action log contains history information that is output in conjunction with the alarm function, which monitors items such as the system load threshold.

For details about this setting, see I. Outputting Action Log Data.

(14) Exporting the logical host environment definition file [Figure] [Figure] [Figure] [Figure] [Figure]

When you have created a logical host environment for PFM - RM for Virtual Machine, export the environment definitions to a file. Exporting outputs to a file the definition information for the Performance Management programs set up on the logical host. If you are setting up other Performance Management programs on the same logical host, export the information after all of those programs have been set up.

To export the logical host environment definitions:

  1. Export the logical host environment definitions by executing the jpcconf ha export command.

    The logical host environment definition information created so far is output to an export file. You can use any name for the export file.

    For example, to export the logical host environment definitions to the lhostexp.txt file, execute the command as follows.

    jpcconf ha export -f lhostexp.txt

(15) Copying the logical host environment definition file to the standby node [Figure] [Figure] [Figure] [Figure] [Figure] [Figure]

Copy the logical host environment definition file exported in (14) Exporting the logical host environment definition file from the executing node to the standby node.

(16) Taking the shared disk offline [Figure] [Figure] [Figure] [Figure] [Figure] [Figure]

Using a cluster software operation or volume manager operation, take the shared disk offline to complete the task. If you plan to continue using the shared disk, there is no need to take it offline.

(17) Importing the logical host environment definition file [Figure] [Figure] [Figure] [Figure] [Figure]

Import the export file copied from the executing node onto the standby node.

Executing the jpcconf ha import command sets up the standby node so that it can run the Performance Management programs of the logical host created on the executing node. If multiple Performance Management programs have been set up for a single logical host, the definitions for all of them are imported at once.

When you execute this command, there is no need to bring the shared disk online.

To import the logical host environment definition file:

  1. Import the logical host environment definitions by executing the jpcconf ha import command.

    Execute the command as follows.

    jpcconf ha import -f lhostexp.txt

    When the command is executed, the environment settings of the standby node are changed to match the content of the export file. This sets up the standby node to start PFM - RM for Virtual Machine on the logical host.

    If a fixed port number was set by the jpcconf port command during setup, the same fixed port number is also set on the standby node.

  2. Confirm the logical host settings by executing the jpcconf ha list command.

    Execute the command as follows.

    jpcconf ha list -key all

    Confirm that the command displays the same content that was displayed when the jpcconf ha list command was executed at the executing node.
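One way to perform this comparison is to capture the jpcconf ha list output on each node into a file and compare the files. The sketch below simulates this with stand-in file contents, since the real output depends on the environment; the file names are hypothetical:

```shell
#!/bin/sh
# Sketch: compare the "jpcconf ha list -key all" output captured on the
# executing node with the capture from the standby node. The file content
# below is an illustrative stand-in, not real command output.
cat > exec_node_list.txt <<'EOF'
jp1-halvm RMVM S:\jp1\jp1pc
EOF
cp exec_node_list.txt standby_node_list.txt   # a correct import yields identical output

if diff -u exec_node_list.txt standby_node_list.txt >/dev/null; then
  echo "MATCH: standby node definitions mirror the executing node"
else
  echo "MISMATCH: re-check the import on the standby node"
fi
```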

(18) Registering PFM - RM for Virtual Machine in cluster software [Figure] [Figure] [Figure] [Figure] [Figure] [Figure]

To operate Performance Management programs in a logical host environment, you need to register them in cluster software, and set up the environment such that the Performance Management programs can be started and stopped based on controls from the cluster software.

For details about how to register PFM - RM for Virtual Machine in cluster software, see the cluster software documentation.

The following describes the settings to be specified when registering PFM - RM for Virtual Machine in the cluster software, using the items registered in Windows WSFC as an example.

For PFM - RM for Virtual Machine, register in the cluster software the services described in the following table.

For the dependency settings specified if PFM - RM for Virtual Machine is on the PFM - Manager logical host, see the chapter on cluster system setup and operation in the JP1/Performance Management User's Guide.

Table 3‒3: PFM - RM for Virtual Machine services to register in cluster software

No. 1
  Name: PFM - RM Store for Virtual Machine instance-name [LHOST]
  Service name: JP1PCAGT_8S_instance-name [LHOST]
  Resource dependencies: IP address resource#1, Physical disk resource#2

No. 2
  Name: PFM - RM for Virtual Machine instance-name [LHOST]
  Service name: JP1PCAGT_8A_instance-name [LHOST]
  Resource dependencies: Cluster resource for item No. 1

No. 3
  Name: PFM - Action Handler [LHOST]
  Service name: JP1PCMGR_PH [LHOST]
  Resource dependencies: IP address resource#1, Physical disk resource#2

#1
  IP address resource defined in the cluster environment of the virtual environment

#2
  Shared disk resource

Replace [LHOST] with a logical host name. If the instance name is inst1 and the logical host name is jp1-halvm, the name becomes PFM - RM Store for Virtual Machine inst1 [jp1-halvm], and the service name becomes JP1PCAGT_8S_inst1 [jp1-halvm].
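As an illustration, this substitution can be expressed as a short shell sketch using the example values inst1 and jp1-halvm from the text:

```shell
#!/bin/sh
# Sketch of how the service names in Table 3-3 expand when the instance
# name is inst1 and the logical host name is jp1-halvm (the example
# values used in this subsection).
inst=inst1
lhost=jp1-halvm

store_service="JP1PCAGT_8S_${inst} [${lhost}]"
rm_service="JP1PCAGT_8A_${inst} [${lhost}]"
handler_service="JP1PCMGR_PH [${lhost}]"

echo "$store_service"     # JP1PCAGT_8S_inst1 [jp1-halvm]
echo "$rm_service"        # JP1PCAGT_8A_inst1 [jp1-halvm]
echo "$handler_service"   # JP1PCMGR_PH [jp1-halvm]
```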

For WSFC, register these services as WSFC resources.

Note:
  • Since services registered in the cluster software are started and stopped from the cluster software, set their Startup type to Manual so that they do not start automatically when the OS starts. Immediately after a service is set up by the jpcconf ha setup command, its Startup type is set to Manual.

    Do not perform a forced stop by executing the following command:

    jpcspm stop -key all -lhost jp1-halvm -kill immediate
  • When PFM - RM for Virtual Machine links with the integrated management product JP1/Integrated Management, set dependencies so that PFM - RM for Virtual Machine services stop before JP1/Base services stop.

(19) Confirming startup and stop from the cluster software [Figure] [Figure] [Figure] [Figure] [Figure] [Figure]

From the cluster software, try to start and stop the Performance Management programs at each node, and confirm that the starting and stopping operations work normally.

(20) Setting up the cluster system environment [Figure] [Figure] [Figure] [Figure] [Figure] [Figure]

Once you have finished setting up the Performance Management programs, set up the environment for these programs so that a report can be displayed from PFM - Web Console on the operation status of a monitored target according to the operating mode, and so that a user notification can be sent when a problem occurs on the monitored target.

For details about how to set up the environment for Performance Management programs, see the chapter that explains configuration and operations in a cluster system in the JP1/Performance Management User's Guide.