OpenTP1 Version 7 Programming Guide

1.3.6 User server load balancing and scheduling

This subsection explains the multiserver facility, which is provided for efficient use of UAPs (user servers), and how UAPs are scheduled.

When an OpenTP1 system service or user server runs, it uses a work area of the OS. The unit of execution that operates on this work area is called a process. A process generated by running a user server is specifically called a user server process, a UAP process, or simply a process. OpenTP1 controls the total number of processes in use so that the number of processes does not increase or decrease beyond appropriate levels.

Before the user server processes can be controlled, the user server must be started, either at the same time as OpenTP1 or by executing the dcsvstart -u command.
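
For example, a user server could be started manually as follows (spp01 is a hypothetical user server name used only for illustration):

  dcsvstart -u spp01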

Organization of this subsection
(1) Multiserver facility
(2) Resident and nonresident processes
(3) Method for process setup
(4) Multiserver load balance
(5) Schedule priority
(6) Internode load-balancing facility
(7) Extended internode load-balancing facility
(8) Multi-scheduler facility

(1) Multiserver facility

If a user server that is handling a service request receives another service request, processing for the new service request can be performed by a new process. In this way, one user server can run another process in parallel with the current process. This is referred to as the multiserver facility.

The multiserver facility is available only to SPPs that use the schedule queue (queue-receiving servers); it is not available to servers that receive requests from a socket. For a server that receives requests from a socket, specify that only one process is to be used.
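
As an illustration only, the distinction might appear in the user service definition as an entry such as the following. The operand name receive_from, its values, and the set format are assumptions rather than facts taken from this subsection, so see the manual OpenTP1 System Definition for the actual operand.

  set receive_from = queue

A server defined this way would receive service requests through the schedule queue and could therefore use the multiserver facility; a server defined with the value socket would be limited to a single process.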

(2) Resident and nonresident processes

UAP processes for which the multiserver facility is specified can be acquired either permanently during OpenTP1 operation or dynamically. A permanently acquired process is called a resident process. A process that is not acquired permanently during OpenTP1 operation but is started only when necessary is called a nonresident process.

If processes are specified as nonresident, the memory area in the OpenTP1 system can be used efficiently. If a process is specified as resident, its user server processing is faster than when it is specified as nonresident.

If no free memory space is available in the system, a nonresident process starts only after a currently running nonresident process terminates.

(3) Method for process setup

The number of processes to be started by the user server as resident and nonresident processes is set up in advance, and the specified number of processes can then be started in parallel.

If more than one resident process is specified, the specified number of processes are started in parallel. If more than one nonresident process is specified, up to the specified number of processes can be started dynamically.
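
A minimal sketch of such a setup, assuming the numbers of processes are specified in the user service definition with an operand such as parallel_count whose first value is the number of resident processes and whose second value is the maximum number of processes (the operand name and value format are assumptions, not facts taken from this subsection; see the manual OpenTP1 System Definition for the actual specification):

  set parallel_count = 3, 8

In this sketch, 3 processes would be resident and up to 5 more could be started dynamically as nonresident processes.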

(4) Multiserver load balance

The number of nonresident processes can be increased or decreased according to the number of service requests in the schedule queue. This is called the multiserver load-balancing facility.

When to start or terminate a nonresident process is determined by the value of the balance_count operand in the user service definition. When the number of service requests in the schedule queue exceeds the product of the balance_count value and the number of active processes, OpenTP1 starts a nonresident process. When the number of service requests in the schedule queue drops below that product, OpenTP1 terminates a nonresident process.

Specify this value in the balance_count operand of the user service definition.
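
For example, assuming the set format of the user service definition (the value shown is purely illustrative):

  set balance_count = 2

With this sketch, if 3 processes are currently active, OpenTP1 starts a nonresident process when the schedule queue holds more than 2 x 3 = 6 service requests, and terminates a nonresident process when the number of queued requests drops below that product.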

(5) Schedule priority

Each user server can be given a schedule priority. Nonresident processes of a user server with a higher schedule priority are scheduled ahead of the nonresident processes of other user servers.

When the processes to be used by a user server are set up, their schedule priority is also specified.
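
As an illustration only, assuming the priority is set in the user service definition with an operand such as schedule_priority (the operand name, the value shown, and which end of the value range means a higher priority are assumptions; check the manual OpenTP1 System Definition for the actual operand and value range):

  set schedule_priority = 1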

The figure below shows an example of process load balancing.

Figure 1-24 Process load balancing

[Figure]

(6) Internode load-balancing facility

When user servers having the same service group name are placed on multiple nodes, a service request can be handled by a user server on any of those nodes. As a result, the load can be distributed among the nodes. This is called the internode load-balancing facility. No special environment setup is required to use this facility; OpenTP1 distributes the load automatically as long as user servers having the same service group name are active on those nodes.

Consider a service group in the OpenTP1 system that contains some user servers that use the multi-scheduler facility and some that do not. In this case, even when there is a significant load on the user servers that use the multi-scheduler facility, the load is not distributed to the user servers that do not use it. To distribute the load to user servers that do not use the multi-scheduler facility, specify the -t option in the scdmulti definition command of the schedule service definition. For details on the scdmulti definition command, see the description of the schedule service definition in the manual OpenTP1 System Definition.

The internode load-balancing facility can distribute the load to a maximum of 128 nodes.

The internode load-balancing facility distributes the load to the node that can process the request most efficiently, based on the schedule status of the nodes. To schedule the user server on the node that contains the UAP requesting the service with priority, specify Y in the scd_this_node_first operand of the schedule service definition for that node.
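
A minimal sketch of the schedule service definition entries mentioned above, assuming the set format for scd_this_node_first and omitting any other scdmulti options (such as those that specify ports or process counts), which are not covered in this subsection:

  set scd_this_node_first = Y
  scdmulti -t

The first line gives scheduling priority to the user server on the local node; the second line lets the load also be distributed to user servers that do not use the multi-scheduler facility.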

The figure below shows the outline of the internode load-balancing facility.

Figure 1-25 Outline of internode load-balancing facility

[Figure]

(7) Extended internode load-balancing facility

With the extended internode load-balancing facility, you can define specifications such as the following:

You can define the number of retries attempted in order to schedule requests to nodes other than the node where a communication error occurred by specifying an appropriate value in the scd_retry_of_comm_error operand of the schedule service definition.
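
For example, assuming the set format of the schedule service definition (the retry count shown is purely illustrative):

  set scd_retry_of_comm_error = 3

With this sketch, scheduling would be retried up to 3 times against nodes other than the node where the communication error occurred.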

TP1/Extension 1 must be installed before you can use this facility. Note that operation will be unpredictable if you run the facility while TP1/Extension 1 is not installed.

(8) Multi-scheduler facility

When a client UAP requests a service provided by a queue-receiving server (an SPP that uses the schedule queue) on a remote node, the scheduler daemon on the node containing the request destination server receives the service request message and stores it in the schedule queue of the relevant queue-receiving server. A scheduler daemon is a system daemon that provides a schedule service.

The scheduler daemon is a single process on each OpenTP1 system. Therefore, as systems become larger and machines and networks deliver increasingly higher performance, the conventional scheduler daemon may have difficulty scheduling messages efficiently. If the conventional scheduler daemon cannot schedule messages efficiently, see C. Examples of System Configurations Requiring Consideration of the Multi-Scheduler Facility.

OpenTP1 provides a daemon exclusively for receiving service requests (referred to below as the multi-scheduler daemon) in addition to the conventional scheduler daemon (referred to below as the master scheduler daemon). The multi-scheduler daemon prevents scheduling delays caused by contention during receive processing. It does this by starting multiple processes and running receive processing for service request messages in parallel. This facility is called the multi-scheduler facility.

TP1/Extension 1 must be installed before you can use this facility. Note that operation will be unpredictable if you run the facility while TP1/Extension 1 is not installed.

To use the multi-scheduler facility, you must specify the following definitions:

On the RPC receiver (the node containing the request destination server):
Schedule service definition: scdmulti definition command
User service definition: scdmulti definition command

On the RPC sender (the node containing the client UAP):
User service definition: multi_schedule operand

You can also group several multi-scheduler daemons for each queue-receiving server. This prevents contention when different servers receive service request messages. To group and use multi-scheduler daemons, specify the -g option in the scdmulti definition command of the user service definition for the server.
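
The sketch below puts these definitions together. Only the options named in this subsection are shown; any other scdmulti options (for example, those that specify the receiving port or the number of daemon processes) are omitted, and the group name grp01 and the set format of multi_schedule are assumptions used only for illustration.

On the RPC receiver:
  Schedule service definition:
    scdmulti
  User service definition of the queue-receiving server:
    scdmulti -g grp01

On the RPC sender:
  User service definition:
    set multi_schedule = Y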

When OpenTP1 starts, it starts the multi-scheduler daemons specified in the definitions at their well-known port numbers, in addition to the master scheduler daemon. The multi-scheduler daemons are started as system daemons that provide schedule services. For details on requesting a service that uses the multi-scheduler facility provided by TP1/Client, see the manual OpenTP1 TP1/Client User's Guide TP1/Client/W, TP1/Client/P.

For details on RPC that uses the multi-scheduler facility, see 2.1.16 RPC with the multi-scheduler facility.

The figure below shows an example of using the multi-scheduler facility.

Figure 1-26 Example of using the multi-scheduler facility

[Figure]