Scalable Database Server, HiRDB Version 8 UAP Development Guide
The HiRDB/Parallel Server divides the query processing of an SQL statement into three main steps. Figure 4-9 shows how SQL statements are processed in the HiRDB/Parallel Server.
Figure 4-9 SQL statement query processing in a HiRDB/Parallel Server
In step 2, the HiRDB system automatically allocates back-end servers that are not accessed by the SQL statement as floatable servers for that statement. This allocation method can be changed by specifying the SQL optimization option.
The following optimization methods relate to floatable server allocation. For details about these optimization methods, see (2) below.
- Increasing the target floatable servers (back-end servers for fetching data)
- Limiting the target floatable servers (back-end servers for fetching data)
- Separating data collection servers
When the following optimization method is applied, the number of allocated floatable servers can be increased to the maximum. The features of this optimization method are described in (3) below.
- Increasing the number of floatable server candidates
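As an illustration, the SQL optimization option is typically specified through the client environment definition. The following sketch assumes the PDSQLOPTLVL client environment variable and an option identifier for limiting the target floatable servers; verify the exact variable name and identifier spelling against the client environment definition descriptions for your HiRDB version.

```shell
# Client environment definition sketch (UNIX shell form).
# PDSQLOPTLVL specifies SQL optimization options as a comma-separated list.
# The identifier below ("limiting the target floatable servers") is assumed
# from HiRDB documentation -- confirm it for your HiRDB version.
export PDSQLOPTLVL="FLTS_ONLY_DATA_BES"
```

With this specification, only the back-end servers in which the searched tables are defined would be allocated as floatable servers, as described in Table 4-5.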
Table 4-5 shows the optimization features related to floatable server allocation.
Table 4-5 Optimization features related to floatable server allocation
Optimization method | Advantages | Disadvantages |
---|---|---|
Omitted | When SQL statements that fetch data from the same back-end server are executed concurrently, search processing is fast because load-imposing processing such as sorting is not allocated to the back-end servers used for fetching data. | The communication load increases because back-end servers that do not fetch data are allocated as floatable servers. |
Increasing the target floatable servers (back-end servers for fetching data) | When this method is combined with "increasing the number of floatable server candidates," the effectiveness of parallel processes such as sorting increases in the floatable servers because all back-end servers are allocated as floatable servers. | When multiple SQL statements are executed concurrently, the concurrent execution performance drops because multiple processes are allocated to the same floatable server. The communication load also increases. |
Limiting the target floatable servers (back-end servers for fetching data) | Work division of the back-end servers can be implemented based on table definitions because only those back-end servers in which tables to be used for searching are defined are allocated as floatable servers. | If a large volume of data is stored in a table that has few partitions, all back-end servers cannot be used effectively because the number of floatable servers that can be used decreases. |
Separating data collection servers | When data is sent to a data collection server simultaneously from that server itself and from a separate server, transfers from the same server take priority, so processing for the separate server is delayed. When data collection servers are separated, the collection server accepts data from all sending servers equally because it is no longer also a sending server. | When a single SQL statement contains multiple queries, such as set operations and searches involving subqueries, concurrent execution performance drops because the same floatable server is used for all of those queries. |
Table 4-6 shows the optimization features related to the number of floatable server allocation candidates.
Table 4-6 Optimization features related to number of floatable server allocation candidates
Optimization method | Advantages | Disadvantages |
---|---|---|
Omitted | In searches involving many data items, more servers are allocated as floatable servers. In searches involving few data items, fewer servers are allocated as floatable servers. | If a narrowing predicate such as = or BETWEEN is specified in the search conditions, the HiRDB system judges that the number of data items is low and automatically reduces the number of allocated floatable servers. If the = or BETWEEN specification does not actually narrow the search, the processing load on the servers increases. |
Increasing the number of floatable server candidates | In searches involving many data items, the HiRDB system uses all floatable servers so that the search can be performed efficiently. | If the number of data items is small, the concurrent execution performance for SQL statements drops because the HiRDB system uses all floatable servers. Also, when there are many table partitions, the communication load increases because the communication paths between the servers become complex. |
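As Table 4-5 notes, increasing the target floatable servers is most effective when combined with increasing the number of floatable server candidates. A sketch of such a combined specification follows; it again assumes the PDSQLOPTLVL client environment variable, and the two option identifiers are assumptions to be checked against the manual for your HiRDB version.

```shell
# Combined specification sketch: allocate all back-end servers as floatable
# servers and use the maximum number of candidates. Identifiers are assumed
# from HiRDB documentation -- confirm them for your HiRDB version.
export PDSQLOPTLVL="FLTS_INC_DATA_BES,FLTS_MAX_NUMBER"
```

Note the trade-off stated in Tables 4-5 and 4-6: when many SQL statements run concurrently, or when the number of data items is small, this combination can reduce concurrent execution performance and increase the communication load.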
Figure 4-10 shows floatable server allocation when the optimization method is omitted.
Figure 4-10 Floatable server allocation when the optimization method is omitted
Figure 4-11 shows floatable server allocation when increasing the target floatable servers (back-end servers for fetching data).
Figure 4-11 Floatable server allocation when increasing the target floatable servers (back-end servers for fetching data) is applied
Figure 4-12 shows floatable server allocation when limiting the target floatable servers (back-end servers for fetching data) is applied.
Figure 4-12 Floatable server allocation when limiting the target floatable servers (back-end servers for fetching data) is applied
Figure 4-13 shows floatable server allocation when separating data collection servers is applied.
Figure 4-13 Floatable server allocation when separating data collection servers is applied
All Rights Reserved. Copyright (C) 2007, Hitachi, Ltd.