OpenTP1 Version 7 Description

6.2.1 Overview of the Multinode facility

In a cluster system or parallel-processing system, multiple server machines connected by a LAN form one large-scale system and operate in parallel. The Multinode facility enables all OpenTP1 systems in such a cluster system or parallel-processing system to be operated from one node, thereby reducing the work required to manage each node in the system.

To use the Multinode facility, TP1/Multi must be installed on every OpenTP1 system.

The following figure shows the software configuration of an OpenTP1 system that uses the Multinode facility.

Figure 6-4 Software configuration of OpenTP1 that uses the Multinode facility

[Figure]

Organization of this subsection
(1) Prerequisites for the Multinode facility
(2) Relation between the Multinode facility and other OpenTP1 facilities
(3) Cluster system or parallel-processing system components

(1) Prerequisites for the Multinode facility

When the Multinode facility is used, the following items are required:

(2) Relation between the Multinode facility and other OpenTP1 facilities

The following facilities are available when OpenTP1 is used in a cluster system or parallel-processing system configuration.

(a) Single OpenTP1 configuration

In a single OpenTP1 configuration, the ordinary OpenTP1 facilities are available, and UAPs start in the environment set up on their own OpenTP1 node. Both of the following OpenTP1 facilities can be used: the Multiserver facility, which starts more than one process for a UAP server at one time, and the Internode Load-Balancing facility, which distributes the processing of a service group across more than one node.
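For reference, the Multiserver facility is typically enabled per user server through the parallel_count operand of the user service definition. The excerpt below is a minimal sketch only: the comment marker is illustrative and other required operands are omitted; see the OpenTP1 definition reference for the exact format.

    # User service definition for one SPP (excerpt)
    set parallel_count = 2,8    # keep 2 resident processes for this server;
                                # start more processes, up to 8, under load

With a definition like this, OpenTP1 keeps two resident processes for the server and starts additional processes, up to the stated maximum, as requests accumulate.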

(b) Double OpenTP1 configuration using the System Switchover facility

In a double OpenTP1 configuration built with the System Switchover facility, the set of OpenTP1 instances in the System Switchover configuration is regarded as one node. After a system switchover, the node is regarded as the same node as before the switchover. Some TP1/Multi facilities (commands and APIs) are unavailable to OpenTP1 instances that use the System Switchover facility.

When you use the System Switchover facility, a maintenance LAN is required; this LAN must not be used for OpenTP1 communication or for normal system switchover processing. Specify a host name on the maintenance LAN as the host name in the multi-node physical definition.
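The excerpt below illustrates the intent only. The entry format and keyword names shown here are placeholders, not the actual multi-node physical definition syntax, which is given in the OpenTP1 definition reference.

    # Hypothetical multi-node physical definition entry
    # (placeholder syntax for illustration only)
    # hosta-mnt is a host name on the maintenance LAN,
    # not the host name used for OpenTP1 communication
    node  node_id=nd01  host=hosta-mnt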

(3) Cluster system or parallel-processing system components

When OpenTP1 is used in a cluster system or parallel-processing system, the system is divided into the areas described below so that the individual OpenTP1 nodes can be managed.

The following figure shows how OpenTP1 is configured when used in a cluster system or parallel-processing system.

Figure 6-5 OpenTP1 configurations in a cluster system or parallel-processing system

[Figure]

(a) Multinode area

The set of OpenTP1 instances in a cluster system or parallel-processing system constitutes a multi-node area. All OpenTP1 nodes to be centrally managed within the system are contained in the multi-node area.

Such a system can have only one multi-node area.

(b) Multinode subareas

The multi-node area is logically divided into a set of multi-node subareas. OpenTP1 nodes in the multi-node area that perform the same type of processing are grouped into the same subarea. For example, the multi-node area may be divided into the following subareas:

(c) OpenTP1 nodes

OpenTP1 systems that constitute a multi-node area or a multi-node subarea are called OpenTP1 nodes. Each OpenTP1 node is distinguished by a node identifier specified in the OpenTP1 system common definition.
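For example, the node identifier is assigned with the node_id operand of the system common definition. The sketch below assumes a four-character identifier; the value itself is a placeholder, and the comment marker is illustrative.

    # OpenTP1 system common definition (excerpt)
    set node_id = nd01    # identifier that distinguishes this node
                          # from the other OpenTP1 nodes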

(d) System definitions

In the multi-node configuration definition, you specify which OpenTP1 nodes belong to each area. For each OpenTP1 node, specify the node identifier that was specified in that node's system common definition.
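As a conceptual sketch only, using placeholder command and keyword names rather than the actual multi-node configuration definition syntax, the grouping can be pictured as follows:

    # Hypothetical multi-node configuration definition
    # (placeholder syntax for illustration only)
    # Subarea sub1 groups nodes nd01 and nd02; subarea sub2
    # groups node nd03. The node identifiers come from the
    # system common definition of each node.
    subarea  name=sub1  nodes=nd01,nd02
    subarea  name=sub2  nodes=nd03

Whatever the concrete syntax, the key point is that this definition refers to nodes solely by their node identifiers, so the identifiers here must match those specified in each node's system common definition.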