
9.10 Partitioning the lock pool

Normally, locking is managed centrally for each server, and only a single lock pool is used. Consequently, when a system executes many transactions concurrently and its operating rate is high, the number of inter-thread lock waits for locking can grow as large as the number of concurrently executing transactions, and the locking process takes longer.

You can avoid such a situation by partitioning the lock pool into multiple segments and distributing the locking process. Partitioned lock pools are called lock pool partitions.

When each locked resource uses a different lock pool partition, the locking process is distributed, which can reduce the inter-thread lock wait count for locking. Since locking requires less time, SQL execution time can also be shortened. The following figure shows an example of locking process distribution.

Figure 9-5 Locking process distribution

[Figure]

Explanation
When only a single lock pool is available, the locking operations for locked resources B and C must wait until the locking operation for locked resource A has finished. When multiple lock pool partitions are available, the locking operations for locked resources A, B, and C are carried out concurrently.

However, if processing is concentrated on a specific locked resource, the locking process is not distributed. In this case, the inter-thread lock wait count on the lock pool partitions does not decrease, so SQL execution time is not shortened.
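To make the idea concrete, the following Python fragment sketches the mechanism described above. It is illustrative only and is not HiRDB code: the per-partition mutex, the partition_for function, and the modulo hash are assumptions that stand in for HiRDB's internal lock pool management.

  import threading

  PARTITION_COUNT = 4

  # One independent lock pool, modelled here as a mutex, per partition.
  partitions = [threading.Lock() for _ in range(PARTITION_COUNT)]

  def partition_for(resource_id):
      # Hash assignment: a given locked resource always maps to the same
      # partition (the real hash function is internal to HiRDB).
      return hash(resource_id) % PARTITION_COUNT

  def with_lock(resource_id, work):
      # Requests for resources that map to different partitions do not wait
      # on one another. Requests concentrated on a single resource (or a
      # single partition) still serialize on that partition's mutex.
      with partitions[partition_for(resource_id)]:
          work()

With PARTITION_COUNT = 1 the sketch degenerates to the single lock pool case, in which the requests for locked resources B and C wait behind the request for locked resource A.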

Organization of this section
(1) Specification method
(2) Advantages and disadvantages of distributing the locking process
(3) Notes

(1) Specification method

Specify the lock pool partition count in the pd_lck_pool_partition operand (or, for a front-end server, in the pd_fes_lck_pool_partition operand). This distributes the locking process.

For details about the pd_lck_pool_partition and pd_fes_lck_pool_partition operands, see the manual HiRDB Version 9 System Definition.
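For example, to partition the lock pool into four segments, the operand might be specified in the system common definition as follows. This is only a sketch: the value 4 is illustrative rather than a recommendation, and it assumes the usual set operand = value notation of the HiRDB system definition. For a front-end server, specify the pd_fes_lck_pool_partition operand in the same way.

  set pd_lck_pool_partition = 4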

The following figure shows the procedure for determining the value to specify.

Figure 9-6 Procedure for determining the value to specify for the pd_lck_pool_partition operand

[Figure]

(a) Initial configuration

Omit the pd_lck_pool_partition operand (for front-end servers, omit the pd_fes_lck_pool_partition operand).

Changing the lock pool partition count after the system has been built changes the amount of shared memory used. Therefore, if such tuning might become necessary later, we recommend that you estimate the shared memory usage in advance, allocate the required physical memory, and set the OS parameters accordingly.

(b) Tuning

To tune the lock pool partition count, the following information is required:

To acquire the tuning information, execute the applications at the concurrent execution count and operating rate expected for the actual jobs. Based on the acquired tuning information, gradually increase the lock pool partition count and set it to the value that minimizes the total SQL execution time of the entire job application, as sketched below.
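The search for a suitable value can be pictured as the following Python sketch. It is illustrative only: run_workload and total_sql_time are hypothetical stand-ins for executing the job applications with a given setting and for reading the total SQL execution time from the acquired tuning information; they are not HiRDB interfaces.

  def choose_partition_count(candidates, run_workload, total_sql_time):
      # Gradually increase the lock pool partition count and keep the value
      # that minimizes the total SQL execution time for the entire job
      # application.
      best_count, best_time = None, float("inf")
      for count in candidates:          # for example 1, 2, 4, 8, ...
          run_workload(partition_count=count)
          elapsed = total_sql_time()
          if elapsed < best_time:
              best_count, best_time = count, elapsed
      return best_count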

However, in the following cases, return the lock pool partition count to the pre-tuning value and try to improve performance using some other method:

When the locking process is distributed, the lock pool partition to be used for each locked resource is determined based on hash partitioning. Consequently, the lock pool partitions used might become unbalanced for some locked resources. When the lock pool partitions become unbalanced, the following situations might occur:
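Whether the hash assignment is balanced for a particular set of locked resources can be pictured with a small sketch like the following. This Python fragment is illustrative only; hash() merely stands in for the hash function that HiRDB applies internally, which is not described here.

  from collections import Counter

  def partition_histogram(resource_ids, partition_count):
      # Count how many locked resources map to each lock pool partition.
      # A strongly skewed result means the locking process is not really
      # distributed, even though multiple partitions exist.
      return Counter(hash(r) % partition_count for r in resource_ids)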

(2) Advantages and disadvantages of distributing the locking process

The following table shows the advantages and disadvantages of distributing the locking process for each type of SQL statement.

Table 9-4 Advantages and disadvantages of distributing the locking process

SQL that does not commit or roll back when processing is completed (most data manipulation SQLs and some control SQLs)
  When the lock pool partition count is increased
    Advantage: SQL performance during concurrent execution improves.
    Disadvantage: Stand-alone SQL performance deteriorates.
  When the lock pool partition count is decreased
    Advantage: Stand-alone SQL performance improves.
    Disadvantage: SQL performance during concurrent execution deteriorates.

SQL statements that commit or roll back simultaneously upon processing completion (COMMIT, ROLLBACK, definition SQL, PURGE TABLE, and so on)
  When the lock pool partition count is increased
    Advantage: --
    Disadvantage: SQL performance during concurrent execution deteriorates.
  When the lock pool partition count is decreased
    Advantage: SQL performance during concurrent execution improves.
    Disadvantage: --

Legend:
--: Not applicable

(3) Notes