Hitachi

JP1 Version 12 JP1/Performance Management - Remote Monitor for Platform Description, User's Guide and Reference


5.3.1 Items to be checked before installing in a cluster system (for Windows)

This subsection describes items to be checked before you start installation of PFM - RM for Platform.

Organization of this subsection

(1) Prerequisites
(2) Information needed for setting up PFM - RM for Platform for logical host operation
(3) Notes about logical host failover
(4) Notes about upgrading when logical host operation is used

(1) Prerequisites

Following are the prerequisites for using PFM - RM for Platform in a cluster system.

(a) Cluster system

Make sure that the following conditions are satisfied:

  • The cluster system is controlled by cluster software.

  • The cluster software is set up in such a manner that it controls startup and termination of the PFM - RM for Platform that is running on the logical host.

  • Both the executing node and the standby node are set up to disable error reporting to Microsoft.

In Windows, if an application error occurs, a dialog box might be displayed to report the error to Microsoft. While this dialog box is displayed, failover might fail to occur. Therefore, you must disable error reporting. If the nodes have not been set up to disable error reporting, take the following steps.

In Windows Server 2012
  1. Choose Control Panel > System and Security > Action Center > Maintenance.

  2. In Check for solutions to unreported problems, click Settings.

  3. In the Windows Error Reporting Configuration dialog box, choose I don't want to participate, and don't ask me again.

  4. Click the OK button.

In Windows Server 2016 and later
  1. Right-click the Windows Start menu and then choose Run from the displayed menu.

  2. Enter gpedit.msc, and then click the OK button.

    The Local Group Policy Editor appears.

  3. Click Computer Configuration, Administrative Templates, Windows Components, and then Windows Error Reporting.

  4. In the right pane, right-click Disable Windows Error Reporting, and then from the displayed menu, choose Edit.

    The setting window appears.

  5. In the setting window, select Enabled.

  6. Click the OK button.
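If you prefer to script this setting, the policy enabled in the steps above corresponds to the following registry value. This is a sketch of an equivalent setting, not a replacement for the procedure above; apply it from an elevated prompt on both the executing node and the standby node.

```powershell
# Equivalent of enabling the "Disable Windows Error Reporting" policy above.
# Run from an elevated prompt on the executing node and the standby node.
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Windows Error Reporting" /v Disabled /t REG_DWORD /d 1 /f
```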

(b) Shared disk

Make sure that the following conditions are satisfied:

  • A shared disk is available to each logical host and information can be inherited from the executing node to the standby node.

  • The shared disk is connected to each node physically by Fibre Channel or SCSI.#1

  • The shared disk can be forced offline by the cluster software or by other means, so that failover can be performed even while an active process is still using the shared disk.

  • If multiple PFM products are running on the same logical host, the shared disk uses the same directory names.#2

#1

Performance Management does not support a configuration that uses a network drive or a disk replicated via the network as the shared disk.

#2

You can change the storage location of the Store database and store it in a different folder on the shared disk.

(c) Logical host names and logical IP addresses

Make sure that the following conditions are satisfied:

  • Each logical host has a logical host name and a corresponding logical IP address, and this information can be inherited from the executing node to the standby node.

  • The logical host names and logical IP addresses are set in the hosts file and name server.

  • If DNS operation is employed, the host name without the domain name is used as the logical host name, not the FQDN.

  • All physical and logical host names are unique within the system.

Important
  • Do not specify a physical host name (host name displayed by the hostname command) as a logical host name. If you do so, normal communication processing might not occur.

  • A logical host name consists of 1 to 32 bytes of alphanumeric characters. The space character and the following symbols cannot be used:

    / \ : ; * ? ' " < > | & = , .

  • For a logical host name, you cannot specify localhost, an IP address, or a host name beginning with a hyphen (-).
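The naming rules above can be checked mechanically. The following is a minimal sketch of such a check (a hypothetical helper script, not part of the product); the character-class test also rejects dotted IPv4 addresses, spaces, and all of the forbidden symbols listed above.

```shell
# Sanity-check a candidate logical host name against the rules above.
valid_lhost() {
  name=$1
  # Must not be empty, must not be localhost, must not begin with a hyphen.
  case $name in
    ''|localhost|-*) return 1 ;;
  esac
  # Must be 1 to 32 bytes.
  [ "${#name}" -le 32 ] || return 1
  # Alphanumeric characters (and embedded hyphens) only. This also rejects
  # IP addresses, spaces, and the symbols / \ : ; * ? ' " < > | & = , .
  printf '%s' "$name" | grep -Eq '^[A-Za-z0-9-]+$'
}

valid_lhost jp1-halrmp    && echo "jp1-halrmp: OK"
valid_lhost localhost     || echo "localhost: rejected"
valid_lhost 172.16.92.100 || echo "172.16.92.100: rejected"
```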

(d) Settings for using IPv6

Performance Management supports both IPv4 and IPv6 network environments. Therefore, you can run Performance Management even in a network environment where IPv4 and IPv6 coexist.

PFM - RM for Platform can use IPv6 to communicate with PFM - Manager. However, this applies only when the OS of each host on which PFM - RM for Platform or PFM - Manager is installed is Windows or Linux. For details about the applicable scope of communication in the IPv4 and IPv6 environments, see M. Communication in IPv4 and IPv6 Environments.

To communicate in IPv6, you must enable the use of IPv6 on both the PFM - Manager host and the PFM - RM host. You specify this setting by executing the jpcconf ipv6 enable command. The following explains how to determine whether you need to execute this command.

Cases in which you need to execute the jpcconf ipv6 enable command:
  • When all hosts are being changed from an IPv4 environment to an IPv6 environment

  • In an environment where IPv4 and IPv6 coexist and PFM - Manager is being changed from an IPv4 environment to an IPv6 environment

Cases in which you do not need to execute the jpcconf ipv6 enable command:
  • When all hosts are already in an IPv6 environment

  • In an environment where IPv4 and IPv6 coexist and PFM - Manager is already in an IPv6 environment

An example of executing the jpcconf ipv6 enable command follows:

jpcconf ipv6 enable

Execute the jpcconf ipv6 enable command separately on the executing node and on the standby node.
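To check whether IPv6 communication is currently enabled on a node, you can use the jpcconf ipv6 display command; for details, see the chapter that describes commands in the manual JP1/Performance Management Reference.

```shell
jpcconf ipv6 display
```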

For details about the jpcconf ipv6 enable command, see the chapter that describes commands in the manual JP1/Performance Management Reference. For details about the conditions and timing for executing the jpcconf ipv6 enable command, see the chapter that describes an example of a network configuration that includes an IPv6 environment in the JP1/Performance Management Planning and Configuration Guide.

When PFM - RM for Platform will use IPv6 to communicate with monitored hosts, specify a monitored host name that can be resolved.

PFM - RM for Platform uses a resolvable IP address to communicate with a monitoring target. When PFM - RM for Platform communicates with a monitoring target in an environment where IPv4 and IPv6 coexist, PFM - RM for Platform will not try to communicate using another IP address if communication using a resolvable IP address fails.

For example, if a connection attempt using IPv4 fails, PFM - RM for Platform will not retry using IPv6. Conversely, if a connection attempt using IPv6 fails, PFM - RM for Platform will not retry using IPv4. Therefore, make sure beforehand that connection can be established.
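Because no retry with another address family occurs, it is worth confirming name resolution and reachability in advance. The following is a minimal sketch of such a check; localhost stands in for a real monitored host name.

```shell
# Confirm beforehand that a monitored host name resolves and that the
# resolved address is reachable. "localhost" is a placeholder here.
target=localhost
getent hosts "$target"   # shows which address (IPv4 or IPv6) the name resolves to
# ping -c 1 "$target"    # then confirm that this address can actually be reached
```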

(e) WMI connection

Make sure that the following conditions are satisfied:

  • The same user account that can connect to the monitored hosts by using WMI is available in the environments for both the executing node and the standby node.

For details about the WMI connection settings, see 3.1.5 WMI connection setting method (when both the PFM - RM host and the monitored host are running Windows).
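As a quick way to confirm the prerequisite above, you can attempt a WMI query from each node using the shared account. The following PowerShell sketch uses placeholder names ("monitored-host" and "MONITORED\pfmuser" are not part of the product).

```powershell
# Run on both the executing node and the standby node.
# You will be prompted for the password of the shared account.
Get-WmiObject -Class Win32_OperatingSystem `
    -ComputerName monitored-host `
    -Credential MONITORED\pfmuser
```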

(f) SSH connection

Make sure that the following conditions are satisfied:

  • A private key is available at the same path in the environments for both the executing node and the standby node.

  • That private key can be used to connect to the monitored hosts.

  • PuTTY is installed on the same path in the environments for both the executing node and the standby node.

    Note: When using OpenSSH (supplied with Windows Server 2019) as the SSH client, you do not need to install PuTTY.

  • The same Perl (either ActivePerl or Strawberry Perl) is installed on the same path in the environments for both the executing node and the standby node.

Note:

Use one of the following methods to register the private and public keys:

  • Copy the private key created on the executing server to the standby server, and then establish its correspondence with the public key that is distributed from the executing server to the monitored host.

  • Create public keys on both the executing and standby servers, and then establish correspondence between them by registering both public keys on the monitored hosts.
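The first registration method above can be sketched as follows, using OpenSSH tools for illustration (with PuTTY as the SSH client, use puttygen instead). The key path, the standby host name, and the monitored host name are placeholders; adjust them to your environment.

```shell
# Create a directory for the keys if it does not exist yet.
mkdir -p "$HOME/.ssh"

# 1. On the executing node, create the key pair
#    (no passphrase here, for brevity only).
ssh-keygen -t rsa -b 2048 -N "" -f "$HOME/.ssh/pfmrm_key"

# 2. Copy the private key to the standby node, at the same path
#    (the same-path prerequisite above).
# scp "$HOME/.ssh/pfmrm_key" standby-node:.ssh/pfmrm_key

# 3. Register the public key on each monitored host for the connection user.
# ssh-copy-id -i "$HOME/.ssh/pfmrm_key.pub" user@monitored-host
```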

For details about the SSH connection settings, see 3.1.6 SSH connection setting method for Windows (when the PFM - RM host is running Windows and the monitored host is running UNIX).

(2) Information needed for setting up PFM - RM for Platform for logical host operation

If you run PFM - RM for Platform on a logical host, you need the information listed in the table below in addition to the environment information that is needed for setting up a normal PFM - RM for Platform.

Table 5‒1: Information needed for setting up PFM - RM for Platform for logical host operation

No.  Item                 Example
---  -------------------  -------------
1    Logical host name    jp1-halrmp
2    Logical IP address   172.16.92.100
3    Shared disk          S:\jp1

If multiple Performance Management programs are running on the same logical host, all of them must use folders on the same shared disk.

For details about the space requirements on the shared disk, see A. Estimating System Requirements.

(3) Notes about logical host failover

If you employ a system configuration in which PFM - RM for Platform runs on a logical host, evaluate whether the entire logical host should failover in the event of a PFM - RM for Platform failure.

If a PFM - RM for Platform failure is to result in failover of the entire logical host, any other job applications running on the same logical host will also fail over, which might adversely affect those jobs.

Typically, we recommend that you configure the cluster software so that errors on PFM - RM for Platform do not cause failover of the entire logical host and therefore do not affect the operation of other applications.

(4) Notes about upgrading when logical host operation is used

To upgrade a PFM - RM for Platform that is running on a logical host, you must place the shared disk online at either the executing node or the standby node.