Whether or not pdload can be executed depends on the open attribute of the RDAREAs containing the tables, indexes, and LOB columns subject to data loading, as well as the status of those RDAREAs. For details, see Appendix C. RDAREA Status During Command Execution.
The maximum number of times pdload can be executed concurrently depends on the value of the pd_utl_exec_mode operand.
Normally, synchronization point dumps are not collected during data loading. If other UAPs are executing while data loading is underway and a system failure occurs, the time required for restart increases. Therefore, you should not execute any UAPs during data loading.
Data loading with the synchronization point specification enables you to collect synchronization point dumps at intervals specified as a number of lines. In the event of abnormal termination, this method requires less restart time than normal data loading, which does not collect synchronization point dumps.
Data stored in national character string columns is not checked to determine whether it consists of multi-byte characters.
To input a file created by VOS3 using EasyMT, you must convert it to the character encoding specified in the pdsetup command before using pdload to load the data.
To use multi-volume magnetic tape, you must install MTguide on the server machine, because it is used for volume switching.
The facility for conversion to a DECIMAL signed normalized number handles the sign part of DECIMAL data as shown below. For details about this facility, see the HiRDB Version 9 System Operation Guide.
Sign part | Description |
---|---|
X'C' | Indicates a positive value. |
X'D' | Indicates a negative value. |
X'F' | Indicates a positive value. |
Sign part of embedded variable data | Not normalized | Normalized |
---|---|---|
X'A' | Error | Converted to X'C' |
X'B' | Error | Converted to X'D' |
X'C' | Not converted | Not converted |
X'D' | Not converted | Not converted |
X'E' | Error | Converted to X'C' |
X'F' | Not converted | Converted to X'C' |
X'0' to X'9' | Error | Error |
Sign part of 0 data | Not normalized | Normalized |
---|---|---|
X'A' | Error | Converted to X'C' |
X'B' | Error | Converted to X'C' |
X'C' | Not converted | Not converted |
X'D' | Not converted | Converted to X'C' |
X'E' | Error | Converted to X'C' |
X'F' | Not converted | Converted to X'C' |
Thus, if the 0 data is normalized, -0 is converted to +0.
For an input data file in the DAT format that is subject to conversion of character string data, the facility stores the normalized values (sign part is X'C' for positive values and 0 data and X'D' for negative values) regardless of the system definitions.
When the facility for conversion to a DECIMAL signed normalized number is used, the DECIMAL columns are output as normalized values, up to the column where pdload detected the error, both in the error data dump-image listing output to the error information file and in the input data output to the error data file.
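The rules in the tables above can be condensed into a small sketch. This is illustrative Python, not HiRDB code; the function name and structure are hypothetical. It models the "Normalized" columns: sign nibbles X'0' to X'9' are always errors, non-zero values keep their positive or negative sense, and every valid sign of 0 data normalizes to X'C' (so -0 becomes +0).

```python
# Sketch of the "Normalized" columns in the tables above (hypothetical code).
POSITIVE_SIGNS = {0xA, 0xC, 0xE, 0xF}   # nibbles treated as positive
NEGATIVE_SIGNS = {0xB, 0xD}             # nibbles treated as negative

def normalize_sign(sign: int, is_zero: bool) -> int:
    """Return the normalized sign nibble: X'C' (positive) or X'D' (negative)."""
    if sign not in POSITIVE_SIGNS | NEGATIVE_SIGNS:
        raise ValueError("sign nibbles X'0'-X'9' are always an error")
    if is_zero:
        return 0xC                      # -0 is normalized to +0
    return 0xC if sign in POSITIVE_SIGNS else 0xD
```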
For a table containing an abstract data type provided by a plug-in, you can call a constructor function to generate the values for the corresponding columns to load the data. However, this data loading is not possible if the constructor function is not created in the shared library.
For a table containing a user-defined abstract data type, data loading is not possible because the system cannot generate values for the corresponding columns.
UOCs cannot be used to load data from input data files in the binary format to a table containing columns of an abstract data type.
If you have added an RDAREA to a rebalancing table, you cannot load data into the added RDAREA (during data loading in units of RDAREAs) until pdrbal has been executed on the rebalancing table and has terminated normally (return code 0).
For a table partitioned by flexible hash partitioning, the utility ignores the hash groups when storing data; therefore, the stored data does not correspond to the hash key values as arranged by pdrbal.
You should note the following about executing data loading on falsification-prevented tables:
Execution of pdload does not activate a trigger. Before executing pdload, check the contents of the table's trigger definition.
Data loading cannot be performed into temporary tables. If an attempt is made to do so, pdload issues the KFPL15031-E message and terminates with an error.
You can check the result of pdload with pddbst (or by running the UAP or using pdrorg to perform unload processing). When pdload processing is finished, a return code is set. Table 5-52 pdload return codes shows the pdload return codes. Table 5-53 pdload return codes (for audit trail table) shows the return codes when pdload is executed on an audit trail table.
Table 5-52 pdload return codes
Return code | Description | Action |
---|---|---|
0 | Normal termination. All data in the input data file was loaded. | None |
0 | Normal termination. All data in the input data file was output to the divided-input data files. | Use the divided-input data files to perform data loading in units of RDAREAs. |
4 | Input data error. Data loading was skipped because some data in the input data file was invalid. | See the error information file, correct the erroneous data in the input data file, and then re-execute data loading. |
4 | Input data error. Normal data in the input data file was output to the divided-input data files, but erroneous data was not (when the -e option was specified, data was output to the divided-input data files until erroneous data was detected). | See the error information file, correct the erroneous data in the input data file, and then re-create the divided-input data files. |
8 | Input data error. When the dataerr operand was specified in the option statement, data loading was rolled back due to an error in the input data file. | Correct the erroneous input data and then re-execute data loading. |
8 | Key duplication error. A key duplication error was detected during batch index creation. | Restore the database from its backup, correct the erroneous input data, and then re-execute data loading. |
8 | Abnormal termination. Data loading was canceled due to an error. | Eliminate the cause of the error and then re-execute data loading. |
8 | Abnormal termination. Output of data from the input data file to the divided-input data files failed. | See the error message, eliminate the cause of the error, and then re-create the divided-input data files. |
Table 5-53 pdload return codes (for audit trail table)
Return code | Description | Action |
---|---|---|
0 | Normal termination. Data was loaded into the audit trail table from all generations of the audit trail files specified in the srcuoc statement. | None |
4 | Warning termination. Data was loaded into the audit trail table from the audit trail files specified in the srcuoc statement that were in data loading wait status, but the files in shutdown or data loading completed status were skipped. | If an audit trail file was skipped because it was in shutdown status, eliminate the cause of the shutdown status and then re-execute data loading. If an audit trail file was skipped because it was in data loading completed status, normally no action is required because data has already been loaded from the file. However, if data loading is needed, specify force in the mode operand in the srcuoc statement and then re-execute data loading. |
8 | Abnormal termination. Data loading was canceled due to an error. | Eliminate the cause of the error and then re-execute data loading. |
To cancel processing during execution of pdload, use the pdcancel command. To terminate pdload forcibly because of a no-response error (such as when a routine job does not complete data loading within the normal amount of time), redirect the display result of the pdls command (with -d rpc -a specified) to a file and then execute the pdcancel -d command.
In this case, if pdload was executing in the creation mode (-d option specified), all the data stored in the table is deleted. If pdload was executing in the addition mode (-d option omitted), processing is rolled back.
If the facility for predicting reorganization time is used and pdload is terminated forcibly by a signal interrupt such as the kill command, the database management table cannot be updated. To terminate pdload while using the facility for predicting reorganization time, make sure that you use the pdcancel command.
When preparing a LOB input file for EasyMT, you need to take into consideration the order in which the columns and rows of the LOB column structure base table are loaded. If you prepare this file randomly without regard to the loading order, a large amount of time may be required for search processing on the LOB input file.
To load data to a table with LOB columns, you need to prepare all the required resources, such as the RDAREAs and buffers of the LOB column structure base table and LOB data, even if you are loading data only to the LOB column structure base table.
The table below shows the file media that are supported during execution of pdload. When a regular file is used, the operating system parameters (kernel parameters) maxfiles, nfile, and nflocks are used during file open processing.
File | Regular file | Fixed-length blocked tape | Variable-length blocked tape |
---|---|---|---|
Input data file | Y | Y# | Y |
Control information file | Y | -- | -- |
Column structure information file | Y | -- | -- |
Null value/function information file | Y | -- | -- |
Error information file | Y | -- | -- |
Error data file | Y | -- | -- |
LOB input file | Y | -- | Y |
LOB middle file | Y | -- | -- |
Index information file | Y | -- | -- |
Work file for sorting | Y | -- | -- |
Process results file | Y | -- | -- |
Y: Can be used
--: Cannot be used
#: Cannot be used for binary format or fixed-size data format.
You should note the following about executing pdload on a database to be extracted that is subject to data linkage:
A large file system enables you to store a file whose size exceeds 2 gigabytes. Table 5-54 Large file support by pdload shows, for each file type, whether pdload supports large files. Note that the maximum size of a file that a process can create depends on OS settings. Set the maximum value of system resources for the HiRDB administrator and the root user to a value greater than the size of the file to be created, or to unlimited. AIX requires special attention because its default file size limit is 1 gigabyte. You can use the limit or ulimit OS command to check the limit values for system resources. When you change the file size limit in AIX, you must also correct the /etc/security/limits file. For details, see the applicable OS and shell documentation. Values changed for the root user take effect only after the OS is restarted, because HiRDB is started from the OS's init process.
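Before creating files larger than 2 gigabytes, you can verify the per-process file size limit programmatically. The sketch below uses Python's standard resource module (UNIX only); it is illustrative and not part of any HiRDB procedure.

```python
import resource

# Query the soft and hard limits on the size of files this process may create
# (the programmatic equivalent of the ulimit -f shell command).
soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)
print("soft:", "unlimited" if soft == resource.RLIM_INFINITY else soft)
print("hard:", "unlimited" if hard == resource.RLIM_INFINITY else hard)

TWO_GB = 2 * 1024 ** 3
if soft != resource.RLIM_INFINITY and soft < TWO_GB:
    print("warning: file size limit is below 2 GB; raise it before loading")
```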
Table 5-54 Large file support by pdload
File type | Support of large file |
---|---|
Input data file | Y |
Error information file | Y |
Error data file | Y |
LOB input file | Y |
LOB middle file | Y |
Index information file | Y |
Work file for sorting | Y |
MT attribute definition file | N |
Process results file | Y |
SQL definition file | Y |
Y: Supported
N: Not supported
When you load data using a DAT-format input file that contains a comma (,) at the end of each row, the data loading may fail due to a column count mismatch. In this case, specify the names of all table columns in the column structure information file and specify the skipdata statement at the end. There is no need to revise the input data.
If coding the column structure information file manually is not feasible because the table contains a large number of columns, you can prepare the file with the following method:
PUTFILE TO filename SELECT COLUMN_NAME FROM MASTER.SQL_COLUMNS
WHERE TABLE_NAME ='table-name' ORDER BY COLUMN_ID
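The trailing-comma problem described above can be illustrated with a short sketch (plain Python, not HiRDB code): a naive split of a DAT-format row that ends in a comma yields one extra, empty field, which is what produces the column count mismatch.

```python
# A DAT-format row with a trailing comma parses to one surplus empty field.
row = "1001,Smith,Sales,"   # three data columns plus a trailing comma

fields = row.split(",")
print(fields)               # ['1001', 'Smith', 'Sales', ''] -- 4 fields, not 3
```

Naming every table column in the column structure information file and ending it with the skipdata statement tells pdload to discard the surplus field instead of rejecting the row.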
When you use the differential index function of the HiRDB Text Search Plug-in, pdload updates the following indexes according to the specification of PDPLUGINNSUB in the client environmental definition:
Presence of existing data | PDPLUGINNSUB: Y | PDPLUGINNSUB: N | PDPLUGINNSUB: not set |
---|---|---|---|
No existing data (data loading in creation mode) | M | M | M |
Existing data present (data loading in addition mode) | S | M | S |
M: Updates the MASTER index
S: Updates the differential index
When loading data to a table containing a unique key index or primary key index, note the following:
If you have created a list on the basis of a table subject to data loading, and execute a search process on the list after data loading, the following events may occur:
In this case, you need to re-create the list before executing the search.
pdload outputs progress messages to the standard output during processing. In the event of an error, error messages are output to standard error. If pdload is executed in an environment in which output to standard output or standard error is suppressed, pdload may stop responding due to a message output wait, or it may output the KFPL20003-E message to the message log file and terminate abnormally. For this reason, you should not execute pdload in an environment in which messages cannot be output to the standard output or standard error output. Note that the sequence and number of messages output to the standard output and to the standard error output may not match the sequence and number in the message log file and the syslogfile. To obtain the accurate messages, view the message log file or the syslogfile.
To perform data loading on a shared table, the system places the RDAREAs containing the shared table and shared indexes defined for the target table in the EX lock mode. If the corresponding RDAREAs contain other tables and indexes, these tables and indexes cannot be referenced or updated. For details about the lock mode used for data loading on shared tables, see B.2 Lock mode for utilities.
When data is loaded to a table for which referential constraints or check constraints have been defined, pdload does not check for data integrity. For this reason, you should use pdconstck to check data integrity during data loading. For details about how to check data integrity, see the HiRDB Version 9 Installation and Design Guide.
If you load time data or timestamp data that includes leap seconds, specify Y in the pd_leap_second operand. When Y is specified in this operand, the permissible range for the seconds portion of time data and timestamp data becomes 0 to 61.
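The effect of the pd_leap_second operand on the permissible seconds range can be sketched as a simple validation rule. This is illustrative Python, not HiRDB code, and it assumes the default upper bound without the operand is the conventional 59.

```python
# With pd_leap_second=Y the seconds field may range from 0 to 61;
# otherwise the conventional upper bound of 59 is assumed to apply.
def seconds_ok(value: int, pd_leap_second: str = "N") -> bool:
    upper = 61 if pd_leap_second == "Y" else 59
    return 0 <= value <= upper

print(seconds_ok(60))        # False: leap seconds rejected by default
print(seconds_ok(60, "Y"))   # True: permitted when pd_leap_second=Y
```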
When data loading is performed in the creation mode (-d specified), the history of data deletions from tables and indexes is applied to the results of the facility for predicting reorganization time.
If pdload has terminated abnormally, executing the pddbst condition analysis result accumulation facility produces an invalid prediction result# because the reorganization timing cannot be predicted in the data loading completed status. Therefore, if pdload terminates abnormally, re-execute pdload until it terminates normally, and then execute the pddbst condition analysis result accumulation facility.
Whether or not options and control statements can be specified depends on the pdload functions being used. The applicable functions are as follows:
Table 5-55 Whether options can be specified when pdload functions are used and Table 5-56 Whether control statements can be specified when pdload functions are used show whether or not options and control statements can be specified when these functions are used. For details about registering data in audit trail tables, see the HiRDB Version 9 System Operation Guide.
Table 5-55 Whether options can be specified when pdload functions are used
Option | Whether or not specifiable | |
---|---|---|
Registering data in audit trail table | Creating divided-input data files | |
-d | O | N |
-a | N | O (specify -a if the input data file is in the fixed-size data format) |
-b | M (specify -b) | N |
-W | M | N |
-w | N | N |
-U | N | N |
-i | O | N |
-l | O | N |
-k | N | N |
{-c|-v} | O (specify -v) | O (specify -c if the input data file is in the fixed-size data format) |
-n | O | N |
-u | O | O |
-x | O | N |
-f | N | N |
-s | N | O |
-e | N | O |
-r | N | N |
-z | O (assumed even when omitted) | O |
-y | O | N |
-o | O | N |
-m | O | O |
-X | O | O |
-q | N | N |
-K | N | N |
-G | N | N
M: Must be specified
O: Can be specified
N: Cannot be specified
Table 5-56 Whether control statements can be specified when pdload functions are used
Control statement | Operand | Whether or not specifiable | |
---|---|---|---|
Registering data in audit trail table | Creating divided-input data files | ||
mtguide | -- | N | N |
emtdef | -- | N | N |
source | -- | M (specify (uoc)) | M (specify input data file) |
index | -- | O | N |
idxwork | -- | O | N |
sort | -- | O | N |
lobdata | -- | N | N |
lobcolumn | -- | N | N |
lobmid | -- | N | N |
srcuoc | -- | M | N |
array | -- | O | O |
extdat | -- | N | O |
src_work | -- | N | M |
option | spacelvl | O | O |
tblfree | O | N | |
idxfree | O | N | |
job | N | N | |
cutdtmsg | N | N | |
nowait | O | N | |
bloblimit | N | N | |
exectime | O | O | |
null_string | N | N | |
divermsg | N | N | |
dataerr | N | N | |
lengover | N | O |
M: Must be specified
O: Can be specified
N: Cannot be specified
If you selected utf-8 or utf-8_ivs as the character encoding in the pdsetup command, you might be able to use a file with a BOM as the input file for pdload. Table 5-57 Whether files with a BOM can be used in pdload (applicable to UTF-8) shows whether or not files with a BOM can be used with pdload. Note that even when a file with a BOM is used as the input file for pdload, the BOM is skipped. No BOM is included in the file that is output by pdload.
Table 5-57 Whether files with a BOM can be used in pdload (applicable to UTF-8)
Option or control statement | Input file | Use of file with a BOM |
---|---|---|
-c | Column structure information file | Y |
-v | Null value/function information file | Y |
-- | Control information file | Y |
emtdef | MT attribute definition file | N |
source | Input data file (DAT format) | Y |
source | Input data file (extended DAT format) | Y |
source | Input data file (binary format) | N |
source | Input data file (fixed-size data format) | N |
source | Input data file (created with pdrorg -W) | N |
source | EasyMT information file | N |
index | Index information file | N |
lobdata | LOB input file | N |
lobcolumn | LOB column input file | N |
lobmid | LOB middle file | N |
-- | SQL definition file | N |
Y: Can be used
N: Cannot be used
If no file output destination is specified in the control information file during execution of pdload, files are output to the directory shown in the table below.
Note that the output destination cannot be specified for SQL definition files in the control information file. The SQL definition files are always output to the directory shown in the table.
Table 5-58 Directories to which pdload outputs files
Output destination in control statement#1 | pd_tmp_directory specified in the system definition | pd_tmp_directory omitted, TMPDIR environment variable#2 specified | pd_tmp_directory omitted, TMPDIR omitted |
---|---|---|---|
Specified | Directory or file specified in the control statement | Directory or file specified in the control statement | Directory or file specified in the control statement |
Omitted | Directory specified in pd_tmp_directory | Directory specified in TMPDIR | /tmp directory |
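The resolution order in Table 5-58 amounts to a simple precedence chain, sketched below. This is illustrative Python; the function name is hypothetical and not part of HiRDB.

```python
# Resolve the pdload output directory per Table 5-58: the control statement
# wins, then pd_tmp_directory, then the TMPDIR environment variable, then /tmp.
def resolve_output_dir(control_stmt=None, pd_tmp_directory=None, tmpdir=None):
    return control_stmt or pd_tmp_directory or tmpdir or "/tmp"

print(resolve_output_dir(None, "/hirdb/tmp", "/var/tmp"))  # /hirdb/tmp
print(resolve_output_dir())                                # /tmp
```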