
C.12 Building the cluster system

Organization of this subsection

(1) General procedure for installing JP1/DH - Server on the cluster system

After you make sure that the prerequisites are met, you can install JP1/DH - Server on each of the active and standby nodes.

  1. Make sure that the installation prerequisites are met.

  2. Install JP1/DH - Server.

(2) Installation prerequisites

To install JP1/DH - Server on the cluster system, you need to have the required installation environment, and the environment must be configured appropriately.

(a) Operating system and clustering software

  • Your combination of operating system and clustering software must be one of the combinations listed in C.2 Supported clustering software.

  • Any patches, updates, and service packs that JP1/DH - Server and your clustering software require must be applied.

(b) Configuration

  • Each node must have an identical environment so that the standby node can take over operations after a failover.

  • A cluster must contain two or more nodes.

(c) Disk

  • Files must be protected by an appropriate means, such as a journaling file system, to prevent file loss if the system goes down.

(d) Network

  • IP addresses that correspond to host names (returned by the hostname command) must be used for communications. Your clustering software and any other software must not block traffic.

  • Your clustering software and name server must not change the mapping of a host name to an IP address while the JP1/DH - Server system is running.

  • The NIC corresponding to the host name must have the highest priority in the network binding setting. Other NICs, such as a NIC for heartbeats, must have a lower priority.

(e) Shared disk

  • You need to make sure that all the criteria below are fulfilled so that data written by the active node is not corrupted during a failover. If they are not fulfilled, the JP1/DH - Server system might experience problems, such as errors, data loss, or a failed startup, resulting in a malfunction of the system.

    • JP1/DH - Server must not be installed on the shared disk.

    • The shared disk must be shared between the active and standby nodes.

    • The shared disk must be allocated to the JP1/DH - Server system before the system starts.

    • The shared disk must not be unallocated while the JP1/DH - Server system is running.

    • The shared disk must be under exclusive control so that it cannot be concurrently accessed by multiple servers.

    • Files must be protected by an appropriate means, such as a journaling file system, to prevent file loss if the system goes down.

    • Data written to files must be consistent and inherited by the standby node during a failover.

    • If the shared disk is in use by a process during a failover, the failover must still be forcibly completed.

    • When a failure is detected on the shared disk, your clustering software must be able to start and stop the JP1/DH - Server processes for recovery, if necessary.

(f) Logical host name and IP address

  • You need to make sure that all the criteria below are fulfilled so that a recovery action is performed when a NIC fails. If they are not fulfilled, a communication error occurs and the JP1/DH - Server system might malfunction until the NIC is switched over or the active node fails over to the standby node by means of, for example, the clustering software.

    • The logical host name must contain only alphanumeric characters and hyphens (-).

    • A logical IP address that can be inherited by the standby node must be available for communications.

    • A logical host name must have a one-to-one relationship with a logical IP address.

    • The logical host name must be in the hosts file or registered in the name server so that TCP/IP communication is possible (see the example entry at the end of this list).

    • The logical IP address must be allocated to JP1/DH - Server before the system starts.

    • The logical IP address must not be removed while the JP1/DH - Server system is running.

    • The mapping of the logical host name to the logical IP address must not be changed while the JP1/DH - Server system is running.

    • Your clustering software or other software must be responsible for recovery actions when a network failure is detected, without the JP1/DH - Server system having to be aware of them. Also, the clustering software must be able to start and stop the JP1/DH - Server processes during the recovery action, if necessary.
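
    For example, assuming a logical host name of jp1dh-logical and a logical IP address of 192.168.10.100 (both values are hypothetical and used only for illustration), the corresponding hosts file entry would look like the following:

    192.168.10.100    jp1dh-logical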

(g) Port management

The active and standby nodes must have the same port number configured for connecting to the Web server. If the port numbers differ, a Web browser cannot display the JP1/DH - Server web windows after a failover. When you change the port number, set the same port number on both the active and standby nodes.

(3) Installing JP1/DH - Server on clustered nodes

The steps below describe how to install JP1/DH - Server on the active and standby nodes.

For steps labeled "In WSFC:" or "In HA Monitor:", perform only the steps that apply to your clustering software.

  1. Make sure that JP1/DH - Server is not installed on the active and standby nodes. If the product is installed on any of the nodes, uninstall it.

  2. Install JP1/DH - Server on the active and standby nodes.

    In WSFC:

    Log in as a built-in Administrator user.

    In HA Monitor:

    Log in as the root user.

    JP1/DH - Server must be installed in a folder with the same path (the same drive and folder names) on both the active and standby nodes.

    To install JP1/DH - Server in the cluster system, you need to perform the new installation procedure. For details, see 5. Installation and Setup.

  3. On the active node, move the database data folder to the shared disk. If the database is up and running, shut down the database before moving the folder.

    From: JP1/DH-Server-installation-folder\PostgreSQL\9.4\data

    To: shared-disk\PostgreSQL\9.4\data

    After the data folder is moved, set permissions on the moved folder as follows:

    In WSFC:

    Grant the full control permission to the postgres account (an example command appears at the end of this step).

    In HA Monitor:

    Use the following commands to change the owner and group to postgres:

    >chown -R postgres /shared-disk/PostgreSQL/9.4/data
    >chgrp -R postgres /shared-disk/PostgreSQL/9.4/data
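
    In WSFC, one way to grant the postgres account full control of the moved folder is the icacls command, as in the following example. The drive letter and path are hypothetical; replace them with the actual path of the data folder on your shared disk:

    >icacls "S:\PostgreSQL\9.4\data" /grant postgres:(OI)(CI)F /T
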
  4. Change the path to the data folder specified in the database startup option. The path must be changed on both the active and standby nodes. You need to perform different steps to change the startup option depending on your clustering software.

    In WSFC:

    The startup option is defined in a registry key, so you change the value of the corresponding registry key. For details about the path to the registry key and the changes to be made, see C.9 (2) Registry key used in the cluster configuration.

    In HA Monitor:

    The startup option is defined in the JP1_DH_DATABASE_SVR script file, which starts and stops the service. In the cluster configuration, a separate file, in which the shared disk is specified in the startup option, is used to start and stop the service.

    The file used in the cluster configuration is stored in the following folder:

    /opt/jp1dh/server/sbin/cluster/

    Change the value of the PGSQLDIR_D variable in the JP1_DH_DATABASE_SVR file to the path of the database data folder on the shared disk (an example appears at the end of this step).

    After you have changed the path, run the following command on the active node:

    >/JP1/DH-Server-installation-folder/sbin/cluster/JP1_DH_DATABASE_SVR start

    The database starts by using the modified JP1_DH_DATABASE_SVR file.
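
    The following line illustrates the PGSQLDIR_D setting after the change, assuming a shell-style variable assignment and an example shared-disk path (the rest of the script is omitted):

    PGSQLDIR_D=/shared-disk/PostgreSQL/9.4/data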

  5. Configure the active and standby server environments. Follow the steps described in 5.3.1 Changing the configuration file. In the cluster configuration, however, some of the configuration steps differ from those in the non-cluster configuration. Be aware of the following differences (a sketch of the relevant elements appears at the end of this step):

    • Specify the logical IP address for the server IP address, instead of the IP address of the host machine. The element in the configuration file is <ip>.

    • Specify the FQDN corresponding to the logical IP address for the server FQDN, instead of the FQDN of the host machine. The elements in the configuration file are <bind-hostname>, <bind-domainname>, and <bind-sub-domainname>.

    • Specify the path on the shared disk for the storage folder for delivery data. The element in the configuration file is <directory>.

    • Change the audit log output destination folder to the path on the shared disk. Define this folder in the following file:

      installation-folder\misc\digikatsuwide\digikatsuwide\WEB-INF\services\ROOT_SERVICE.srv

      Change the value of the log.file directive in the configuration file to the path on the shared disk.

      log.file = shared-disk\log\jp1dh-audit.log
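
    The following is a minimal sketch of how the elements listed above might appear in the configuration file in a cluster configuration. The values are placeholders, and the surrounding structure of the file is omitted; follow 5.3.1 for the actual file layout and for how the FQDN is divided among the elements:

    <ip>logical-IP-address</ip>
    <bind-hostname>host-name-part-of-the-logical-FQDN</bind-hostname>
    <bind-domainname>domain-name-part-of-the-logical-FQDN</bind-domainname>
    <bind-sub-domainname>sub-domain-name-part-of-the-logical-FQDN</bind-sub-domainname>
    <directory>path-on-the-shared-disk</directory>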

  6. On the active node, change the JP1/DH - Server application configuration. The configuration on the standby node will be changed in a later step. In this step, only change the configuration of the active node.

    In WSFC:

    In the Start menu of the Windows machine, right-click Command Prompt and select Run as Administrator from the context menu. In the displayed command prompt, perform the steps described in 5.3.2 Changing the application configuration.

    In HA Monitor:

    Perform the steps described in 5.3.2 Changing the application configuration.

  7. Before changing the application configuration on the standby node, change the current owner of resources, including the shared disk and network, from the active node to the standby node.

    In WSFC:

    As described in step 6, change the JP1/DH - Server application configuration on the standby node.

    After the change is done on the standby node, change the current owner of the resources from the standby node to the active node.

    In HA Monitor:

    On the active node, run the following command to shut down the database:

    >/JP1/DH-Server-installation-folder/sbin/cluster/JP1_DH_DATABASE_SVR stop

    When the database is stopped, the shared disk needs to be mounted on the standby node so that the standby node can use the disk. To do this, use the umount command to unmount the disk on the active node, and then use the mount command to mount it on the standby node (example commands appear at the end of this step).

    When the task is completed, run the following command on the standby node to start the database.

    >/JP1/DH-Server-installation-folder/sbin/cluster/JP1_DH_DATABASE_SVR start

    When all the tasks are completed on the standby node, use the same command described above to shut down the database on the standby node. Then, unmount the shared disk from the standby node and mount it back to the active node. Finally, start the database on the active node.
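
    The following command sequence illustrates moving the shared disk between the nodes. The device name and mount point are hypothetical; use the values for your environment.

    On the active node:

    >umount /shared-disk

    On the standby node:

    >mount /dev/sdb1 /shared-disk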

  8. On the active node, configure settings for the electronic certificate authentication function. Follow the steps described in 5.3.3 Specifying the settings for the electronic certificate authentication function.

  9. Register the root certificate in the active and standby nodes. Follow the steps described in 5.3.4 Registering a root certificate.

    You need to perform this step to encrypt mail server traffic via SSL (SMTPS/STARTTLS) when the delivery notification function is used. Also, perform this step to encrypt directory server traffic via SSL when a directory server is used to authenticate users who try to log in to the system.

  10. Edit the hosts file on the active and standby nodes. Follow the steps described in 5.3.5 Editing the hosts file.

  11. On the active node, create a certificate file used for SSL communication. Follow the steps described in 5.4.1 Creating a secret key file for SSL communication, 5.4.2 Creating a password file, and 5.4.3 Creating a certificate file for SSL communication.

    In the cluster configuration, however, some of the configuration steps differ from those in the non-cluster configuration. Specify the host name (FQDN) corresponding to the logical IP address for the server host name (FQDN), instead of the host name (FQDN) of the host machine.
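
    As an illustration only, the following openssl commands create a secret key and a certificate signing request whose common name (CN) is the logical FQDN. The actual tools, options, and file names to use are those described in 5.4.1 through 5.4.3; the file names and the FQDN below are hypothetical:

    >openssl genrsa -out httpsd.key 2048
    >openssl req -new -key httpsd.key -out httpsd.csr -subj "/CN=jp1dh-logical.example.com"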

  12. Store all the files created in step 11 on the shared disk.

  13. Edit your JP1/DH Web server configuration files on the active and standby nodes. Follow the steps described in 5.4.4 Editing the settings for the JP1/DH Web server.

    In the cluster configuration, however, some of the configuration steps differ from those in the non-cluster configuration. Specify the host name (FQDN) corresponding to the logical IP address for the server host name (FQDN), instead of the host name (FQDN) of the host machine. Also, store the certificate on the shared disk, and set the path to the certificate folder to the path on the shared disk (a sketch appears at the end of this step).

    In WSFC:

    To run the deploy_websvr.bat batch file, in the Start menu, find and right-click Command Prompt, select Run as Administrator from the context menu, and then run the batch file in the displayed command prompt.

    In HA Monitor:

    Log in as the root user to edit the files.
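
    If your JP1/DH Web server configuration uses Apache-style SSL directives (this is an assumption made only for this sketch; 5.4.4 describes the actual files and directive names), the certificate-related entries might point to the shared disk as follows. The FQDN and paths are hypothetical:

    ServerName jp1dh-logical.example.com
    SSLCertificateFile "shared-disk/ssl/httpsd.pem"
    SSLCertificateKeyFile "shared-disk/ssl/httpsd.key"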