OpenTP1 Version 7 Description
OpenTP1 provides various facilities for controlling processes. A process is the unit of execution and memory in which OpenTP1 system services and user servers (UAPs) run. A process generated by executing a user server is sometimes called a user server process, a UAP process, or simply a process.
In the process service definition, you can specify the total number of processes so that only the required number of system service processes and user server processes is started.
You must start the user server before attempting to control user server processes. The user server can be started:
The main facilities for controlling processes are:
The following sections describe the above facilities in more detail.
OpenTP1 provides the Multiserver facility to enable multiple instances of a service to be executed in parallel, in different processes, in response to multiple requests for the service. When a new service request arrives at a user server that is already executing, a new user server process can be started to handle the request.
Not all SPPs can use the Multiserver facility. SPPs that use schedule queues (that is, servers that receive requests from queues) and MHPs can use the Multiserver facility, but servers that receive requests from a socket cannot. For a server that receives requests from a socket, specify one process only in the parallel_count operand of the user service definition.
A process of a UAP that uses the Multiserver facility can be kept running throughout OpenTP1 operation or started dynamically when needed. Processes that are always running are called resident processes. Processes that are started only when necessary are called non-resident processes.
Non-resident processes have the advantage of using the memory area of the OpenTP1 system efficiently. Resident processes have the advantage of faster processing for the user server, because no process startup is needed when a request arrives.
When using the Multiserver facility, specify in the user service definition the maximum number of processes to be used. The specified number of resident processes is started when the user server starts and executed in parallel; non-resident processes are started dynamically, up to the specified number.
If no spare system memory is available, a new non-resident process is executed only after a currently executing non-resident process finishes.
Set the number of resident and non-resident processes in the parallel_count operand of the user service definition before starting the user server.
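For illustration, the definition described above might look as follows. This is a sketch; the two-value form (resident count, maximum count) and the example values are assumptions to verify against the system definition reference for your version:

```
# Hypothetical example: 2 resident processes, a maximum of 5 processes.
# With this setting, up to 3 non-resident processes can be started
# dynamically when the schedule queue builds up.
set parallel_count = 2,5
```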
The Multiserver facility can increase or decrease the number of non-resident processes according to the number of service requests in the schedule queue.
The startup timing of non-resident processes depends on the value specified in the balance_count operand of the user service definition. If the number of service requests remaining in the schedule queue exceeds (value of balance_count x number of processes currently started), OpenTP1 starts non-resident processes. If the number of remaining service requests falls below (value of balance_count x number of processes currently started), OpenTP1 terminates non-resident processes.
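As a worked example of the threshold described above (the value 3 is hypothetical):

```
# Hypothetical example: with balance_count = 3 and 2 processes started,
# the threshold is 3 x 2 = 6 queued requests.
#   More than 6 remaining requests  -> start a non-resident process.
#   Fewer than 6 remaining requests -> terminate a non-resident process.
set balance_count = 3
```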
Figure 3-48 gives an overview of the Multiserver facility based on resident and non-resident processes.
Figure 3-48 Overview of Multiserver facility based on resident and non-resident processes
In the user service definition, you can assign a scheduling priority to each user server. Non-resident processes in a user server that has high scheduling priority are preferentially scheduled compared to non-resident processes in a user server that has low scheduling priority.
Figure 3-49 provides an overview of scheduling priority.
Figure 3-49 Overview of scheduling priority
OpenTP1 can process a heavily-used service on multiple nodes. When many processes are required for processing a service requested from an SPP, OpenTP1 can distribute the processing to SPPs with the same service group name on other nodes. To use this facility for balancing loads among nodes, the following conditions must be satisfied:
Service requests are passed to the user server on a randomly selected node. OpenTP1 references the server information of each node and avoids selecting a node on which scheduling is difficult. Therefore, even when a schedulable user server exists on the local node, the request is not always passed to that user server. To select the user server on the local node first, specify scd_this_node_first=Y in the schedule service definition. With Y specified, OpenTP1 selects a user server on another node only when the user server on the local node has difficulty accepting the request.
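A minimal sketch of the schedule service definition entry described above:

```
# Prefer the user server on the local node when it can accept the request;
# fall back to other nodes only when local scheduling is difficult.
set scd_this_node_first = Y
```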
The internode load-balancing facility assumes that the user servers on the nodes operate under similar conditions. If the following conditions differ greatly among the candidate nodes, the environment is unsuitable for the internode load-balancing facility; in that case, do not place service groups with the same name on multiple nodes. The conditions that should not differ greatly are:
Each node reports server information that is referenced when a request is allocated to a user server. In a system where SPPs of the same service group are not distributed among multiple nodes, this reporting is unnecessary; in particular, when a public line is used, it incurs charges for needless connections. In such a system, specify scd_announce_server_status=N in the schedule service definition of every node to suppress the reporting of server information.
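The suppression described above is a single schedule service definition entry, specified identically on all nodes:

```
# Suppress reporting of server information
# (specify on every node in the system).
set scd_announce_server_status = N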
OpenTP1 can distribute loads for SPPs that receive requests either from a queue or from a socket. When such an SPP is busy, OpenTP1 passes its service requests to a user server on another node. The node is selected almost at random, except that, for servers that receive requests from a queue, OpenTP1 checks the status of the candidate node and controls selection so that nodes with low scheduling efficiency are unlikely to be selected. For a server that receives requests from a socket, OpenTP1 neither checks node status nor controls selection.
Figure 3-50 gives an overview of the Internode Load-Balancing facility.
Figure 3-50 Overview of Internode Load-Balancing facility
Service requests are scheduled according to the load level of each node. Three load levels are used: LEVEL0, LEVEL1, and LEVEL2, in increasing order of load.
The load level of each node is checked at each load check interval. At that time, the current load level is determined according to the previous load level, the number of queued service requests, the number of remaining service requests, and the server processing rate.
Table 3-12 shows the conditions that determine the load level.
Table 3-12 Conditions that determine the load level
Previous load level | Number of queued service requests: Q | Number of remaining service requests: q | Server processing rate: X | Current load level |
---|---|---|---|---|
LEVEL0 | Q ≥ 1 | -- | X < 50 | LEVEL1 |
LEVEL1 | Q ≥ 1 | -- | 75 ≤ X | LEVEL0 |
LEVEL1 | Q ≥ 1 | -- | 50 ≤ X < 75 | LEVEL1 |
LEVEL1 | Q ≥ 1 | -- | X < 50 | LEVEL2 |
LEVEL2 | -- | q = 0 | -- | LEVEL0 |
LEVEL2 | -- | q ≥ 1 | -- | LEVEL2 |
If the load level changes, the server information is reported to the name service of each node and updated there. By specifying the loadlevel_message operand in the user service definition, you can have a message output that reports the change of load level.
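A sketch of the user service definition entry described above (the Y/N value form is an assumption to verify against the definition reference):

```
# Hypothetical example: output a message when the load level changes.
set loadlevel_message = Y
```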
This section describes the definitions and processing on the TP1/Server Base and TP1/Client sides and RPC processing when using the internode load-balancing facility.
In the definition on the TP1/Server Base side, either:
or
In the definitions on both server and client sides, either:
or
Table 3-13 shows the operations when the internode load-balancing facility is used with other facilities.
Table 3-13 Operations of the internode load-balancing facility used with other facilities
When using | Operation |
---|---|
Permanent connection by TP1/Client | The CUP execution process of TP1/Server Base performs an RPC in the node that established the permanent connection. This is the same operation as in the case when the server and client are TP1/Server Base. |
Transaction control API by TP1/Client | The transaction delegated execution process of TP1/Server Base performs an RPC. This is the same operation as in the case when the server and client are TP1/Server Base. |
Remote API facility | The RAP-processing server of TP1/Server Base performs an RPC. This is the same operation as in the case when the server and client are TP1/Server Base. |
The user can specify the following items:
Figure 3-51 Scheduling service requests to LEVEL0 nodes
set levelup_queue_count = U1,U2
set leveldown_queue_count = D0,D1
Table 3-14 Numbers of remaining service requests and load levels
Previous load level | Number of remaining service requests: q | Current load level |
---|---|---|
LEVEL0 | q < U1 | LEVEL0 |
LEVEL0 | U1 ≤ q < U2 | LEVEL1 |
LEVEL0 | U2 ≤ q | LEVEL2 |
LEVEL1 | q ≤ D0 | LEVEL0 |
LEVEL1 | D0 < q < U2 | LEVEL1 |
LEVEL1 | U2 ≤ q | LEVEL2 |
LEVEL2 | q ≤ D0 | LEVEL0 |
LEVEL2 | D0 < q ≤ D1 | LEVEL1 |
LEVEL2 | D1 < q | LEVEL2 |
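As an illustration of these thresholds with hypothetical values U1 = 50, U2 = 100, D0 = 10, D1 = 30:

```
set levelup_queue_count = 50,100
set leveldown_queue_count = 10,30
# With these values, a LEVEL0 node moves to LEVEL1 when 50 or more
# requests remain (q >= 50) and to LEVEL2 when q >= 100. A LEVEL2 node
# returns to LEVEL1 when q drops to 30 or below, and to LEVEL0 when
# q <= 10.
```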
In addition to the regular scheduler daemon (hereafter called the master scheduler daemon), you can start multiple daemon processes dedicated to receiving service requests (hereafter called multi-scheduler daemons) so that several service request messages can be received concurrently. This avoids the scheduling delay caused by contention during reception. This solution is called the multi-scheduler facility.
To use the multi-scheduler facility, you must specify:
You can also group multi-scheduler daemons by server that receives requests from a queue. This grouping prevents servers from contending with one another to receive service request messages. When multi-scheduler daemons are grouped, you must specify scdmulti in the schedule service definition on the server side.
This facility requires that TP1/Extension 1 be installed. Operation is not guaranteed if TP1/Extension 1 is not installed.
Figure 3-52 gives an overview of the multi-scheduler facility.
Figure 3-52 Overview of multi-scheduler facility
All Rights Reserved. Copyright (C) 2006, 2010, Hitachi, Ltd.