Hitachi

JP1 Version 12 JP1/Performance Management User's Guide


10.4.2 Installing and setting up PFM - Manager

Organization of this subsection

(1) Process flow for installation and setup

The following figure shows the process flow for installation and setup of PFM - Manager used on a logical host.

Figure 10‒20: Process flow for installation and setup of PFM - Manager used on a logical host (in UNIX)

[Figure]

Important
  • When PFM - Manager in a logical host environment is set up, PFM - Manager for the physical host environment can no longer be executed. However, the Action Handler service can still be executed, because it is used by PFM - Agent or PFM - RM in the physical host environment.

    When unsetup is performed on PFM - Manager in a logical host environment, PFM - Manager in the physical host environment can once again be executed.

  • When PFM - Manager in a logical host environment is set up, the definitions for PFM - Manager in the physical host environment are inherited by the logical host environment. However, the content of the Store database is not inherited. If unsetup is performed on PFM - Manager in the logical host environment, the definitions for the logical host environment and the Store database are deleted, and therefore switching to the physical host environment is not possible.

  • Do not manually set the JPC_HOSTNAME environment variable, because Performance Management uses it internally. If you set this environment variable, Performance Management will not run correctly.

  • For PFM - Manager version 09-00 or later, when you set up a new instance of PFM - Manager in a logical host environment, the settings of the health check function in the physical host environment are inherited by the logical host environment. Modify the settings of the health check function as necessary.

  • In a logical host environment, the function for setting monitoring-host names cannot be used. The jpccomm.ini file on a logical host is ignored and the host name for the logical host is used.

  • If the monitoring suspension function is enabled, monitoring for all the hosts and agents must be resumed before setup.

In the procedure explanation, the image [Figure] indicates items to be performed on the executing node, and the image [Figure] indicates items to be performed on the standby node. In addition, the image [Figure] indicates setup items that are either required depending on the environment or can be performed if you want to set a value other than the default settings.

(2) Installation procedure [Figure] [Figure]

Perform a new installation of PFM - Manager on the executing node and the standby node. The installation procedure is the same as for a non-cluster system. For details, see the chapter describing installation and setup (in UNIX) in the JP1/Performance Management Planning and Configuration Guide.

Note:

The installation destination is the local disk. Do not install PFM - Manager on the shared disk.

(3) Setup procedure

Perform PFM - Manager setup on the executing node first. Next, export the logical host environment definitions for the executing node to a file. Finally, import the file containing the environment definitions to the standby node to apply the setup content from the executing node to the standby node.

Figure 10‒21: Method for applying the content set up on the executing node to the standby node

[Figure]

Each setup procedure is explained below.

(a) Specifying the LANG environment variable [Figure] [Figure]

Specify the LANG environment variable on the executing node and the standby node.

For details about how to specify the LANG environment variable, see the section on setting the LANG environment variable in the JP1/Performance Management Planning and Configuration Guide.

(b) Performing an additional setup for PFM - Agent or PFM - RM information [Figure] [Figure] [Figure]

To perform integrated management of PFM - Agent or PFM - RM in a cluster system, register the agent information of PFM - Agent or PFM - RM in PFM - Manager for the executing node and the standby node.

If PFM - Agent or PFM - RM is already registered in PFM - Manager, you do not have to follow the procedure described below. If PFM - Agent or PFM - RM is not registered yet, manually register PFM - Agent or PFM - RM according to the procedure.

When all of the following conditions apply, manually register PFM - Agent or PFM - RM in PFM - Manager:

  • The PFM - Agent or PFM - RM to be installed is of a product version that is not specified in the Release Notes for PFM - Manager.

  • PFM - Agent or PFM - RM is installed on a host other than PFM - Manager.

If, however, the Release Notes for PFM - Agent or PFM - RM state that it is necessary to execute the setup command, execute the setup command.

The setup procedure is the same as for a non-cluster system. For details, see the chapter describing installation and setup (in UNIX) in the JP1/Performance Management Planning and Configuration Guide.

(c) Making sure the shared disk is mounted [Figure]

Make sure that the shared disk is mounted. If the shared disk is not mounted, execute the mount command to mount the file system.

Note:

If setup is performed without mounting the shared disk, files might be created on the local disk.

(d) Setting up a logical host for PFM - Manager [Figure]

Set up the logical host environment for PFM - Manager on the executing node. Before performing setup, stop all the Performance Management programs and services throughout the entire system.

  1. Create a logical host environment.

    Execute the jpcconf ha setup command to create a logical host environment for PFM - Manager.

    Use the -lhost option to specify the logical host name. For DNS operation, specify a logical host name that does not include a domain name. Use the -d option to specify a directory on the shared disk as the environment directory.

    For example, execute the following command to set up a logical host with jp1-ha1 as the logical host name and /usr/jp1 as the environment directory.

    jpcconf ha setup -key Manager -lhost jp1-ha1 -d /usr/jp1

    When this command is executed, the jp1pc directory is created under /usr/jp1, and the files required in the logical host environment are copied to the environment directory. The following figure shows an example.

    Figure 10‒22: Execution example of the jpcconf ha setup command

    [Figure]

    When the command is executed, the required data is copied from the local disk of the executing node to the shared disk, and the settings required for use on the logical host are performed.

    When the PFM - Manager's logical host is set up, the connection-target PFM - Manager in the physical host environment is changed to the specified logical host name.

    For details on the jpcconf ha setup command, see the chapters that describe commands in the manual JP1/Performance Management Reference.

  2. Check the settings for the logical host environment.

    Execute the jpcconf ha list command to check the settings for the logical host, and make sure that the logical host environment that has been created is correct.

    jpcconf ha list -key all

    An example of executing this command is as follows:

    [Figure]

    For details on the jpcconf ha list command, see the chapters that describe commands in the manual JP1/Performance Management Reference.

(e) Performing a setup for a logical host of PFM - Agent or PFM - RM [Figure] [Figure]

This procedure is required only when there is a PFM - Agent or PFM - RM to set up in the same logical host in addition to PFM - Manager.

For details on the setup procedure, see the chapters that describe operations on cluster systems in the appropriate PFM - Agent or PFM - RM manual.

(f) Specifying network settings [Figure]

To allow communication between PFM - Manager and PFM - Web Console using a logical host name or logical IP address, add the following line to jpcvsvr.ini (environment-directory/jp1pc/mgr/viewsvr/jpcvsvr.ini):

java.rmi.server.hostname=logical-host-name-or-logical-IP-address

For details on the host names used for communication between PFM - Manager, PFM - Web Console, and JP1/SLM, see the sections that describe port numbers in the manual JP1/Performance Management Reference.
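As an illustration, the shell sketch below appends this line to the jpcvsvr.ini file. The set_rmi_hostname helper is hypothetical, and the environment directory and logical host name in the example are placeholders for your own values:

```shell
#!/bin/sh
# Sketch: append the RMI hostname setting to jpcvsvr.ini so that PFM - Web Console
# connects to PFM - Manager using the logical host name or logical IP address.
# set_rmi_hostname is a hypothetical helper name.

set_rmi_hostname() {
  env_dir=$1     # environment directory on the shared disk
  lhost=$2       # logical host name or logical IP address
  ini="$env_dir/jp1pc/mgr/viewsvr/jpcvsvr.ini"

  mkdir -p "$(dirname "$ini")"
  printf 'java.rmi.server.hostname=%s\n' "$lhost" >> "$ini"
}

# Example (placeholder values):
# set_rmi_hostname /usr/jp1 jp1-ha1
```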

In addition, use the following procedure when changing IP addresses and port numbers according to the network configuration.

  • Setting up IPv6 communication

    When using Performance Management in an IPv6 environment, enable IPv6 support by executing the jpcconf ipv6 enable command on the PFM - Agent, PFM - RM, and PFM - Manager hosts.

    In a cluster system, execute the command on the executing and standby nodes.

    Note that only IPv4 communication is supported between PFM - Manager and PFM - Web Console.

    For details, see the chapter describing installation and setup (in UNIX) in the JP1/Performance Management Planning and Configuration Guide.

  • Setting the IP address [Figure]

    To set the IP addresses, directly edit the content of the jpchosts file. If you have edited the jpchosts file, copy the file from the executing node to the standby node.

    For details on setting IP addresses, see the chapter describing installation and setup (in UNIX) in the JP1/Performance Management Planning and Configuration Guide.

  • Setting port numbers [Figure]

    This procedure is necessary only when running Performance Management in a network environment with a firewall.

    For Performance Management communications via a firewall, use the jpcconf port define command to set the port numbers.

    For example, execute the following command to fix the port numbers of all services on the host with the logical host name jp1-ha1:

    jpcconf port define -key all -lhost jp1-ha1

    When this command is executed, definitions of the port number and service name (TCP service name beginning with jp1pc by default) for Performance Management are added to the services file.

    For details on setting port numbers, see the chapter describing installation and setup (in UNIX) in the JP1/Performance Management Planning and Configuration Guide.

    In this example, the jpcconf port define command is executed in interactive mode. However, the command can also be executed in non-interactive mode. For details on the jpcconf port define command, see the chapters that describe commands in the manual JP1/Performance Management Reference.

  • Setting the host name or IP address used for communication with PFM - Web Console and JP1/SLM

    In the following situations, define the host name or IP address of PFM - Manager in the jpcvsvr.ini file on the PFM - Manager host.

    • IP address translation (NAT translation) takes place between the PFM - Manager host and the PFM - Web Console host.

    • Multiple IP addresses are used between the PFM - Manager host and the PFM - Web Console host.

    • When linking with JP1/SLM, IP address translation (NAT translation) takes place between the PFM - Manager host and the JP1/SLM host.

    • When linking with JP1/SLM, multiple IP addresses are used between the PFM - Manager host and the JP1/SLM host.

    For details, see the chapter describing installation and setup (in UNIX) in the JP1/Performance Management Planning and Configuration Guide.

(g) Changing the log file size [Figure] [Figure]

The operating status of Performance Management is output to a dedicated log file called the common message log. This setting is required only if you want to change the size of this file.

For details, see the chapter describing installation and setup (in UNIX) in the JP1/Performance Management Planning and Configuration Guide.

(h) Specifying settings for the authentication mode [Figure] [Figure]

This setting is required only if you want to change the authentication mode of Performance Management from PFM authentication mode to JP1 authentication mode.

For details, see 2. Managing User Accounts and Business Groups.

(i) Specifying settings for access control based on business groups [Figure] [Figure]

This setting is required if you want to use business groups to manage users in Performance Management. You can enable or disable access control based on business groups by entering a setting in the startup information file (jpccomm.ini).

For details, see 2. Managing User Accounts and Business Groups.

(j) Changing the storage locations of event data [Figure] [Figure]

The settings below are required if you want to change the storage destination, backup destination, or export destination of the event data managed by PFM - Manager.

By default, event data is stored in the following locations:

  • Data storage folder: environment-directory/jp1pc/mgr/store/

  • Backup folder: environment-directory/jp1pc/mgr/store/backup/

  • Export folder: environment-directory/jp1pc/mgr/store/dump/

For details on how to change a destination, see the chapter describing installation and setup (in UNIX) in the JP1/Performance Management Planning and Configuration Guide.

(k) Specifying settings for action log output [Figure] [Figure]

This setting is required if you want to output an action log when an alarm is issued. An action log is log information output in conjunction with the alarm function, when an aspect of the system (such as the system load) exceeds a threshold. For details on how to set this option, see the section describing action log output in an appendix of the JP1/Performance Management Planning and Configuration Guide.

(l) Configuring the health check function [Figure] [Figure]

  1. Check the settings of the health check function.

    Execute the following command on the PFM - Manager host on the executing node to display the setting of the health check function.

    jpcconf hc display

    When the command is executed, the setting for the health check function appears as follows:

    • If the health check function is enabled: available

    • If the health check function is disabled: unavailable

    For details on the jpcconf hc display command, see the chapters that describe commands in the manual JP1/Performance Management Reference.

  2. Change the setting of the health check function.

    Execute the following command on the PFM - Manager host on the executing node to set up the health check function, if necessary.

    • To enable the health check function:

    jpcconf hc enable
    • To disable the health check function:

    jpcconf hc disable

    For details on the jpcconf hc enable and jpcconf hc disable commands, see the chapters that describe commands in the manual JP1/Performance Management Reference.

(m) Exporting the logical host environment definitions [Figure]

When a logical host environment for PFM - Manager is created on the executing node, apply the settings information for the executing node to the standby node. First, export the logical host environment definitions for the executing node to a file. If you intend to set up other Performance Management programs on the same logical host, perform the export after all setup procedures are completed.

  1. Execute the jpcconf ha export command.

    Export the logical host environment definitions to the desired file.

    For example, execute the following command to export the logical host environment definitions to the lhostexp.conf file.

    jpcconf ha export -f lhostexp.conf

    If the health check function is enabled for the PFM - Manager in the logical host environment you are exporting, the health check agent will be set up on the logical host. In this case, information relating to the health check agent will be exported.

    In this example, the jpcconf ha export command is executed in interactive mode. However, the command can also be executed in non-interactive mode. For details on the jpcconf ha export command, see the chapters that describe commands in the manual JP1/Performance Management Reference.

(n) Copying the file containing the logical host environment definitions to the standby node [Figure] [Figure]

Copy the file that has been exported in step (m) from the executing node to the standby node, so that it will be applied on the standby node.

Next, unmount the file system to complete the work. If this shared disk will continue to be used, it is not necessary to unmount the file system.

Note:

If the jp1pc directory and associated files exist in the specified environment directory on the local disk even though the shared disk is unmounted, setup was performed without the shared disk being mounted. In that case, use the following procedure:

  1. Use the tar command to archive the jp1pc directories in the environment directory specified on the local disk.

  2. Mount the shared disk.

  3. If the specified environment directory does not exist on the shared disk, create an environment directory.

  4. Expand the tar file in the environment directory on the shared disk.

  5. Unmount the shared disk.

  6. Delete the jp1pc directory and associated files in the environment directory specified on the local disk.
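Steps 1, 3, 4, and 6 of this procedure can be sketched as follows. The directory paths and the relocate_jp1pc function name are placeholders; mounting and unmounting the shared disk (steps 2 and 5) depend on your cluster software and are omitted:

```shell
#!/bin/sh
# Sketch: move a jp1pc directory from the local-disk environment directory
# to the shared-disk environment directory (steps 1, 3, 4, and 6 above).
# relocate_jp1pc is a hypothetical helper name.

relocate_jp1pc() {
  local_env=$1    # environment directory on the local disk
  shared_env=$2   # environment directory on the mounted shared disk
  archive=/tmp/jp1pc_move.$$.tar

  ( cd "$local_env" && tar cf "$archive" jp1pc )   # 1. archive the jp1pc directory
  mkdir -p "$shared_env"                           # 3. create the environment directory
  ( cd "$shared_env" && tar xf "$archive" )        # 4. expand the archive on the shared disk
  rm -rf "$local_env/jp1pc" "$archive"             # 6. delete the local copy
}

# Example (placeholder paths):
# relocate_jp1pc /usr/jp1 /shdsk/jp1
```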

(o) Importing the file containing the logical host environment definitions [Figure]

Import the export file copied from the executing node into the standby node.

  1. Execute the jpcconf ha import command.

    Import the logical host environment definitions into the standby node.

    For example, execute the following command if the export file name is lhostexp.conf.

    jpcconf ha import -f lhostexp.conf

    When the jpcconf ha import command is executed, the environment settings for the standby node are changed to be the same as those for the executing node, so that the standby node is set up to use PFM - Manager on the logical host.

    If the health check function is enabled for the PFM - Manager in the logical host environment you are importing, the health check agent will be set up on the logical host. In this case, information relating to the health check agent will be imported.

    In this example, the jpcconf ha import command is executed in interactive mode. However, the command can also be executed in non-interactive mode. For details on the jpcconf ha import command, see the chapters that describe commands in the manual JP1/Performance Management Reference.

  2. Check the settings for the logical host environment.

    Execute the jpcconf ha list command in the same manner as for the executing node to check the settings of the logical host.

    Execute the command as follows:

    jpcconf ha list -key all

    For details on the jpcconf ha list command, see the chapters that describe commands in the manual JP1/Performance Management Reference.

(4) Cluster software setting procedure

Cluster software settings are required for both the executing node and the standby node.

(a) Registering PFM - Manager in the cluster software [Figure] [Figure]

To use PFM - Manager on a logical host, register it in the cluster software, and set the cluster software to control the starting and stopping of PFM - Manager.

For details on how to register PFM - Agent or PFM - RM in the cluster software, see the chapters that describe operations on cluster systems in the appropriate PFM - Agent or PFM - RM manual.

Generally, the following four types of commands are required when registering an application in UNIX cluster software: Start, Stop, Monitor operations, and Forced stop.

The following table lists and describes the settings in PFM - Manager.

Table 10‒8: Control commands for PFM - Manager registered in the cluster software

Start

Execute the following commands in order to start PFM - Manager:

/opt/jp1pc/tools/jpcspm start -key Manager -lhost logical-host-name

/opt/jp1pc/tools/jpcspm start -key AH -lhost logical-host-name

Perform this action only after the shared disk and logical IP address have become available.

Stop

Execute the following commands in order to stop PFM - Manager:

/opt/jp1pc/tools/jpcspm stop -key AH -lhost logical-host-name

/opt/jp1pc/tools/jpcspm stop -key Manager -lhost logical-host-name

Perform this action before the shared disk and logical IP address become unavailable.

If a service has already stopped because of a problem, the jpcspm stop command returns 3. Because the services are stopped, this can be treated as a normal termination. For cluster software that determines execution results from return values, the recovery value can be set to 0.

Monitor operations

Use the ps command to check whether the following process is running:

ps -ef | grep "process-name logical-host-name"

For details on process names, see 10.6.1(3) Service names. Hitachi recommends that you prepare a way to suppress operation monitoring (for example, a command that stops monitoring while a maintenance flag file exists), in case Performance Management must be stopped temporarily for maintenance.

Remarks:

If you specify ps -ef | grep "process-name logical-host-name", an unnecessary grep row might be displayed as shown below.

jpcah logical-host-name -d /home/pfm/clu/jp1pc/bin/action

grep xxxx

In this case, specify grep -v "grep xxxx" to stop the unnecessary grep row from being displayed.

(The specification of grep -v is just an example. If the grep row is still displayed, check the grep command arguments of the OS.)

Example:

ps -ef | grep "process-name logical-host-name" | grep -v "grep xxxx"

Forced stop

Execute the following command when a forced stop is required:

/opt/jp1pc/tools/jpcspm stop -key all -lhost logical-host-name -kill immediate

Only all can be specified as the service key in the first argument.

Note:

If this command is executed, SIGKILL is sent to perform a forced stop of all Performance Management processes in the specified logical host environment. At this time, the forced stop is performed on Performance Management not for each service, but for each logical host.

Set this item to perform a forced stop only when the system cannot be stopped by executing a normal stop command.
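As a rough illustration of the Stop and Monitor operations rows above, the helper functions below normalize the jpcspm stop return value and filter the grep line out of the ps output. Both function names are hypothetical, and the commented-out jpcspm invocation and logical host name are placeholders:

```shell
#!/bin/sh
# Hypothetical helpers for a cluster control script (names are illustrative).

# Treat jpcspm stop's return value 3 (services already stopped) as success (0).
normalize_stop_rc() {
  rc=$1
  if [ "$rc" -eq 3 ]; then
    rc=0
  fi
  return "$rc"
}

# Check whether a process matching the pattern is running, excluding the
# grep process itself from the ps output.
is_running() {
  pattern=$1
  ps -ef | grep "$pattern" | grep -v "grep" > /dev/null
}

# Example stop handling (command path and logical host name are placeholders):
# /opt/jp1pc/tools/jpcspm stop -key Manager -lhost jp1-ha1
# normalize_stop_rc $?
```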

Notes:
  • Do not configure automatic startup at OS startup, because the starting and stopping of the Performance Management programs registered in the cluster are controlled by the cluster software.

  • When running Performance Management in a Japanese or Chinese language environment, configure the cluster software to run a script that sets the LANG environment variable before executing any Performance Management commands. In an environment where the LC_ALL environment variable is set to a different value from the LANG environment variable, either unset the LC_ALL environment variable or change its value to match the LANG environment variable. You can unset LC_ALL by adding the following setting:

    unset LC_ALL

  • If the cluster software determines execution results on the basis of the return values from commands, configure the cluster software to convert the return values from Performance Management commands to what the cluster software expects. For details about the return values from Performance Management commands, see the reference documentation for each command.

  • Before using the ps command for monitoring operations, execute the ps command to confirm that a character string that is a combination of the logical host name and the instance name is correctly displayed. If part of the character string is not displayed, shorten the instance name.

    When you use the ps command to identify the process name and logical host name, the command sometimes fails to acquire the information, in which case the information might appear in square brackets. Read the manual page for the ps command in your operating system and execute the command again.

  • When PFM - Manager is linked with integrated management products (JP1/IM), set dependency relationships in a way that allows services to start and stop in the following order:

- JP1/Base services start before PFM - Manager services start.

- PFM - Manager services stop before JP1/Base services stop.
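For the LANG and LC_ALL handling described in the notes above, a wrapper script run by the cluster software might look like the following sketch. The locale value and the wrapped command are placeholders; use the locale your environment requires:

```shell
#!/bin/sh
# Sketch of a wrapper the cluster software runs instead of calling
# Performance Management commands directly. The locale value and the
# command at the end are placeholders.

LANG=ja_JP.UTF-8
export LANG

# Make sure LC_ALL does not override LANG.
unset LC_ALL

# exec /opt/jp1pc/tools/jpcspm start -key Manager -lhost jp1-ha1
```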

(b) Checking starting and stopping from the cluster software [Figure] [Figure]

Check whether the cluster software is operating correctly by using it to issue start and stop requests to PFM - Manager or PFM - Web Console on each node.