Specifies whether or not to execute space conversion on the input data.
Available space conversion levels are 0, 1, and 3.
When the spacelvl operand is omitted, the system executes space conversion according to the value of the pd_space_level operand in the system common definitions.
- 0
- The system does not execute space conversion on the input data.
- 1 or 3
- The system executes space conversion on the input data. Values of 1 and 3 have the same effect.
- If the table columns subject to data loading are of the national or mixed character string type, the system converts the input data and stores the data in the database as follows:
- National character string type
The system scans the input data in 2-byte units from the beginning and converts each pair of consecutive single-byte spaces to one double-byte space.
- Mixed character string type
The system converts each double-byte space in the input data to two consecutive single-byte spaces.
When the character encoding is UTF-8, the system converts each double-byte space (3 bytes) to two single-byte spaces, so the converted data becomes shorter. For MCHAR, the system adds trailing single-byte spaces up to the defined length; for MVARCHAR, the shortened data length is stored as is. (A sketch of these conversion rules appears after the notes below.)
- If the data is to be stored via a constructor function for the columns of abstract data type and the data for the corresponding constructor function's argument has the national or mixed character string type, the system executes space conversion before passing the input data as an argument to the constructor function.
- Criteria
- Specify this operand to make all spaces in the table data either single- or double-byte characters.
- For details about how to make all spaces in table data either single- or double-byte characters, see the HiRDB Version 9 System Operation Guide.
- Note
- When executing space conversion using the spacelvl operand or the system common definitions, note the following:
- The system checks the order of cluster key values after the data conversion; therefore, the cluster key sequence checking may result in an error even if the data was sorted by the cluster key values before conversion. In this case, specify the -x option to skip the cluster key sequence checking. To store data in the order of cluster key values, reorganize the data after data loading is completed.
- If the column used as the unique key index or primary key index is subject to space conversion, duplicate key values may result in the index. In this case, take one of the following actions:
Unique key index: Either remove the unique index specification beforehand or modify the corresponding data in the input data.
Primary key index: Modify the corresponding data in the input data.
- A comparison value for the null value specified in the column structure information file or null value/function information file is not subject to space conversion. In this case, the comparison value for the null value is compared to the input data before space conversion.
- The length of the input data before space conversion is used for checking the input data length.
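The conversion rules described above can be pictured with a short sketch. The snippet below only illustrates the behavior described for spacelvl=1 or 3 in a UTF-8 environment; the function names, and the assumption that the double-byte space is U+3000 (3 bytes in UTF-8), are ours and not part of pdload.

```python
# Illustrative sketch of the spacelvl=1/3 conversion rules (UTF-8 environment).
# Assumptions (not from the manual): the double-byte space is U+3000 and the
# helper names are hypothetical.

DOUBLE_BYTE_SPACE = "\u3000".encode("utf-8")   # 3 bytes in UTF-8
SINGLE_BYTE_SPACE = b" "

def to_national(data: bytes) -> bytes:
    """NCHAR/NVARCHAR columns: scan in 2-byte units from the beginning and
    convert each pair of consecutive single-byte spaces to one double-byte space."""
    out = bytearray()
    i = 0
    while i + 1 < len(data):
        pair = data[i:i + 2]
        out += DOUBLE_BYTE_SPACE if pair == b"  " else pair
        i += 2
    out += data[i:]                      # odd trailing byte, if any
    return bytes(out)

def to_mixed(data: bytes, defined_len: int, varying: bool) -> bytes:
    """MCHAR/MVARCHAR columns: convert each double-byte space to two
    single-byte spaces; pad MCHAR up to the defined length."""
    out = data.replace(DOUBLE_BYTE_SPACE, SINGLE_BYTE_SPACE * 2)
    if not varying:                      # MCHAR: pad with trailing spaces
        out = out.ljust(defined_len, SINGLE_BYTE_SPACE)
    return out                           # MVARCHAR: shortened length is kept

print(to_national(b"ab  cd"))                                   # b'ab\xe3\x80\x80cd'
print(to_mixed("a\u3000b".encode("utf-8"), 8, varying=False))   # b'a  b    '
```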
(b) tblfree = {percentage-of-unused-space|([percentage-of-unused-space], percentage-of-free-pages-in-segment)}
Specify this operand to change the percentage of unused space specified with CREATE TABLE (the PCTFREE value) that is applied when data is stored during data loading.
- percentage-of-unused-space
- You can specify a value in the range from 0 to 99.
- percentage-of-free-pages-in-segment
- You can specify a value in the range of 0 to 50.
- Criteria
- If you specified a non-zero percentage of unused space during table definition, that percentage also takes effect during data loading, so the RDAREA may run out of space while the data is being stored. Specifying the tblfree operand in this case lets you complete the data loading without having to extend the RDAREA temporarily.
- Note
- It is more advantageous to specify this operand than the -y option to store data in free space for the following reasons:
- Batch input/output operations take effect even when data is stored in free space.
- You can specify p in the -l option.
(c) idxfree = percentage-of-unused-area
If you are creating indexes in batch index creation mode, specify this operand to change the percentage of unused area specified with CREATE INDEX (value of PCTFREE).
The permitted value range is from 0 to 99.
- Criteria
- If you specified a non-zero percentage of unused area during index definition, that percentage also takes effect during batch index creation, so the RDAREA may run out of space while the indexes are being stored. Specifying the idxfree operand in this case lets you complete the batch index creation without having to extend the RDAREA temporarily.
(d) job = {job-name[,rows-count-between-synchronization-points][,CLR]|,,CLR}
Specify this operand to execute data loading with the synchronization point specification.
Data loading with the synchronization point specification is a method of data loading in which a transaction is settled each time the specified number of data items are stored. If the log acquisition mode (-l a) or pre-update log acquisition mode (-l p) is used to acquire log information and an error occurs while data is being stored, this method enables you to restore the database in a short period of time without having to store all the data from the beginning. For details about error handling, see Handling utility abnormal termination errors during data loading with the synchronization point specification in the HiRDB Version 9 Installation and Design Guide.
Note that you cannot specify this operand when you perform data loading on an audit trail table, because the utility automatically executes a COMMIT for each generation of an audit trail file and therefore cannot issue COMMIT requests at the specified number of data items. If this operand is specified in such a case, a control statement error results.
- job-name ~<alphanumerics> ((1-3))
- Specify the name of the job that executes data loading with the synchronization point specification.
- This job name is used during re-execution in the event the utility terminates abnormally during data loading. If this job name is the same as another pdrorg job name, re-execution results in a malfunction. Therefore, be sure to specify a unique job name.
- rows-count-between-synchronization-points ~((1-1000)) <<100>> (in units of 10,000 rows)
- Specify, in units of 10,000 data items, the interval at which a transaction is to be settled. For example, a value of 100 sets a synchronization point after every one million data items.
- A small value settles a transaction after only a small number of data items, so rollback processing takes little time in the event of abnormal termination; however, transactions are generated and settled, and processes are restarted, frequently, resulting in a large overhead. Conversely, a large value has little effect on performance, but error recovery takes longer. (A sketch of this commit-and-restart pattern appears after the notes for this operand.)
- An error results if you specify only the number of rows between synchronization points without specifying a job name.
- CLR
- Specify this operand if there is no need to re-execute pdload or if pdload is to be re-executed after clearing the current synchronization point information in the database.
- If the utility terminates abnormally, the synchronization point information remains in the database. If this information is retained during the next execution, the utility may not function correctly.
- Note that if you specify CLR, the system clears all synchronization point information, whether it was specified by another user or utility.
- To determine if the synchronization point information is present in the database, use the database condition analysis utility (pddbst) to execute condition analysis in units of tables.
- Criterion
- This operand is useful for loading a large amount of data that takes a considerable amount of time to store. However, it adversely affects performance because of synchronization point processing. Additionally, more data pages are required than in normal data loading because data storage begins on a new page at each synchronization point.
- Example of job operand specification
- Loading data with a synchronization point specified for every one million data items
job=JOB,100 or job=JOB
- Clearing the current synchronization point information and specifying new synchronization points at every one million data items to execute data loading
job=JOB,100,CLR or job=JOB,,CLR
- Clearing the current synchronization point information and executing normal data loading
job=,,CLR
- Notes
- Synchronization point information is managed by job name. Suppose that pdload (A), which was executing data loading with the synchronization point specification, terminates abnormally. If you then execute pdrorg (B) specifying the same job name for the same table, pdrorg (B) uses the synchronization point information left by pdload (A). Therefore, control job names so that they are not duplicated.
- If data loading with the synchronization point specification terminates abnormally, you cannot execute the normal data loading (without the synchronization point specification) on the same table until you re-execute the data loading with the synchronization point specification having the same job name.
- The synchronization point information is managed in units of RDAREAs. For a row-partitioned table, the synchronization point information for the table as a whole becomes inconsistent in the following cases:
Only some of the RDAREAs were re-initialized.
An error occurred during data loading with the synchronization point specification in units of RDAREAs.
If the job name is the same but the numbers of lines between synchronization points do not match, the synchronization point information managed by each RDAREA cannot be used for re-execution. If some of the RDAREAs have synchronization point information while others do not, you can re-execute data loading using the retained synchronization point information. The following table shows how the system handles data loading when the job name of the synchronization point information is the same but the numbers of lines do not match:
| Retained synchronization point information: RDAREA 1 | RDAREA 2 | RDAREA 3 | Input begin line for data loading with synchronization point specification in units of tables |
|---|---|---|---|
| 0 | 0 | 0 | Item 0 |
| 0 | n | 0 | Item n |
| n | 0 | m | Cannot be executed. |
| n | m | p | Cannot be executed. |
0: No synchronization point information retained
n, m, p: Retained numbers of lines between synchronization points
- If data loading with the synchronization point specification terminates abnormally, you cannot modify the index creation method during re-execution (-i option).
- If data loading with the synchronization point specification terminates abnormally after EasyMT has been remounted, mount volume 1 again during the re-execution.
- If data loading with the synchronization point specification terminates abnormally, and if you re-initialize RDAREAs or execute the PURGE TABLE statement before re-execution, the existing synchronization point information is deleted.
- If data loading with the synchronization point specification terminates abnormally during the batch creation of a plug-in index, and the corresponding plug-in index supports the batch plug-in index creation partial recovery facility, execute pdrorg to create the plug-in index that resulted in an error, then re-execute pdload. If you re-execute pdload without executing pdrorg, the entire plug-in index is re-created.
- The maximum number of data items that can be controlled during data loading with the synchronization point specification is 4,294,960,000. To load more data items than this, use multiple input data files and execute pdload more than once.
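As a rough illustration of what data loading with the synchronization point specification implies for commit and restart behavior, the following sketch commits after every interval of rows and records the last settled row so that a re-execution can resume from that point. Everything here (the function name, the store/commit callbacks, the resume logic) is a hypothetical model, not pdload's actual implementation.

```python
# Hypothetical model of synchronization-point data loading: commit every
# `interval` rows and remember the last stored row so that a re-execution
# can skip the rows that were already settled before a failure.

def load_with_syncpoints(rows, store_row, commit, interval_units=100,
                         resume_from=0):
    """interval_units is in units of 10,000 rows, matching the job operand
    (the default of 100 therefore means one commit per 1,000,000 rows)."""
    interval = interval_units * 10_000
    lineno = resume_from
    for lineno, row in enumerate(rows, start=1):
        if lineno <= resume_from:
            continue                      # already stored by the failed run
        store_row(row)
        if lineno % interval == 0:
            commit()                      # synchronization point
    commit()                              # settle the final partial interval
    return lineno                         # last line stored and settled

stored = []
last = load_with_syncpoints(range(1, 2_500_001), stored.append,
                            lambda: None, interval_units=100)
print(len(stored), last)                  # 2500000 2500000
```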
If, during data loading, character string data from a DAT-format input data file is stored in columns of the following data types and the input data is longer than the defined column length, the excess part of the data is discarded:
- Character string data type (CHAR, VARCHAR)
- National character string data type (NCHAR, NVARCHAR)
- Mixed character string data type (MCHAR, MVARCHAR)
- BINARY data type
This operand specifies whether or not to output a warning message (KFPL31090-W) to the error information file when this data truncation occurs.
on: Output warning messages.
off: Do not output warning messages.
- Rules
- If the -e option is specified, the system assumes cutdtmsg=off.
- For an input data file in a format other than DAT, this operand is ignored if specified.
- If cutdtmsg=on is specified and data truncation occurs, pdload returns code 4, even when all data has been stored.
- A warning message is output for each data item resulting in truncation (each column and element), not for each line. Therefore, if you are specifying cutdtmsg=on, check the disk space, because many warning messages may be output.
- The system outputs warning messages to the error information file, but it does not output the corresponding input data to the error data file.
Specifies whether or not a NOWAIT search is to be conducted on the table subject to data loading. The NOWAIT search is an SQL function for executing a search process with the NOWAIT lock option specified.
yes: Execute NOWAIT search.
no: Do not execute NOWAIT search.
- Rules for nowait=yes
- You cannot specify the batch index creation mode (-i c). Specify the index update mode (-i s).
- You cannot use the local buffer (-n option).
- The system does not support data loading in units of RDAREAs.
- In the creation mode (-d option specified), you cannot execute a NOWAIT search on the table while data is being deleted (if executed, processing is placed in lock-release wait status). You can execute a NOWAIT search while data is being stored.
- For a table with columns of abstract data type provided by a plug-in, you can execute a NOWAIT search if the plug-in provides the no-lock search facility. To see if the plug-in provides the no-lock search facility, see the applicable plug-in manual.
- When loading data only to the LOB column of a table, you can reference the LOB column structure base table but not the LOB columns, regardless of this operand specification. To reference the LOB columns, execute data loading on both the LOB column structure base table and LOB columns at the same time.
- In the event of a pdload rollback due to abnormal termination, segments allocated during data loading are not released. These segments are reused when pdload is re-executed, but they cannot be reused by a UAP's INSERT or UPDATE statement. Therefore, if you execute a UAP's INSERT or UPDATE statement before re-executing pdload, segments that can be reused by neither pdload nor the UAP are retained until the corresponding RDAREA runs out of unused segments. To release such segments, you must reorganize the table with pdrorg. To avoid this situation, place the RDAREA in reference-possible shutdown status and then execute pdload. (No UAP will be able to access the table even if pdload terminates abnormally.)
- For a shared table, a specification of yes is ignored (no is assumed).
~<unsigned integer> ((1-2,097,152))
If you are executing either of the following types of data conversion using a pdrorg-output binary-format input data file, you might need to retain data in memory:
- Converting data for BLOB parameter with abstract data type to a BLOB-type column
- Converting BLOB data to an abstract data type column with BLOB parameter
In this case, specify, in KB, the size of the memory area used to retain the data. The system compares this operand value with the defined length of the BLOB parameter or BLOB type and allocates an area of the smaller size. (A worked example of this rule appears after the rules below.)
If you are using pdload unload files to load data into a table with BLOB-type columns, specify this operand to allocate the area on the basis of the actual length of the BLOB data stored in the database, not on the length defined for the BLOB type. You can use an SQL statement to obtain the maximum length of BLOB data from the unloaded table. The buffer is shared among multiple BLOB-type columns. The following shows the actual size of the area that is allocated:
MIN(sum of the lengths defined for all BLOB-type columns in the table, bloblimit operand value)
If the total length of BLOB data becomes larger than the size of the allocated area, processing is stopped.
- Criteria
- Specify this operand if either one of the following conditions is true:
- When converting data for a BLOB parameter with abstract data type to a BLOB-type column, the data for the BLOB parameter is followed by the data to be stored in the abstract data type column with BLOB parameter.
- When converting BLOB data to an abstract data type column with BLOB parameter, the BLOB data is preceded by the data to be stored in the BLOB-type column.
- Rules
- This operand is applicable only when pdload unload files are used.
- If there is a data item that exceeds the specified size, an error results.
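As a small worked example of the allocation rule above, the following sketch computes the buffer size from the defined BLOB column lengths and the bloblimit value. The function name and the sample figures are illustrative only.

```python
# Illustrative computation of the buffer size described above:
# MIN(sum of the defined lengths of all BLOB-type columns, bloblimit value).

def blob_buffer_size_kb(defined_blob_lengths_kb, bloblimit_kb):
    return min(sum(defined_blob_lengths_kb), bloblimit_kb)

# Two BLOB columns defined as 1 GB (1,048,576 KB) each, bloblimit=2097152:
print(blob_buffer_size_kb([1_048_576, 1_048_576], 2_097_152))  # 2097152

# The same table with bloblimit=500000 allocates only 500,000 KB; a data item
# whose total BLOB length exceeds the allocated area stops processing.
print(blob_buffer_size_kb([1_048_576, 1_048_576], 500_000))    # 500000
```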
~<unsigned integer> ((0-35791394))
Specifies, in minutes, the interval for monitoring the pdload execution time. If pdload does not terminate within the specified amount of time, the system forcibly terminates the pdload control process and collects error information for determining the cause of the no-response condition. If 0 is specified, the execution time of pdload is not monitored.
If this operand is omitted, a value is selected from the system definition in the order listed below and that value takes effect:
- pd_utl_exec_time
- pd_cmd_exec_time
- Criterion
- If you specify a monitoring interval for processing such as nighttime batch jobs, then if pdload is placed in no-response status by an error such as a communication error (including a transient error) or a disk error during execution, the processing terminates abnormally instead of hanging. This enables you to detect the abnormality and start recovery processing at an early stage.
- Guidelines for the specification value
- The purpose of this operand is to detect a no-response error, not to monitor the execution time of a long-running transaction. Therefore, the operand value must be large enough for the processing of the applicable table. For example, to monitor the execution time of a pdload that should terminate within 7 to 8 minutes, specify exectime=20, not exectime=10. When you execute data loading in addition mode, specify a larger value, because even for the same number of data items in the input data file the execution time varies with the number of data items already stored (for a table whose data increases monotonously, check and, if necessary, revise the specified value).
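The following is not how pdload implements monitoring; it merely sketches the idea of an execution-time watchdog from the caller's side. The command line is a placeholder, not a documented invocation.

```python
# Illustrative only: a caller-side watchdog analogous in spirit to exectime;
# forcibly stop a process if it does not finish within the limit.
import subprocess

def run_with_time_limit(cmd, limit_minutes):
    try:
        return subprocess.run(cmd, timeout=limit_minutes * 60).returncode
    except subprocess.TimeoutExpired:
        # exectime would instead terminate the pdload control process and
        # collect error information for investigating the no-response state.
        return None

# Example: allow 20 minutes for a job expected to finish in 7 to 8 minutes.
rc = run_with_time_limit(["sleep", "1"], limit_minutes=20)
print(rc)   # 0
```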
Specifies whether the default value set in the DEFAULT clause or the null value is to be stored when the input data is the null value ("*" or omitted) during data loading on a table with the DEFAULT clause specified.
This operand is applicable to a DAT-format input data file. An error results if this operand is specified for an input data file in the binary, fixed-length data, or pdrorg-generated binary format.
- null: Store the null value.
- default: Store the default value set in the DEFAULT clause.
Specifies that data storage processing is to be invalidated (rolled back) when an input data error (logical error) is detected.
Table 5-15 Relationship between the -e and dataerr operands shows the relationship between the -e and dataerr operands. Table 5-16 Notes about specifying the dataerr operand together with other operands and control statements provides notes about specifying the dataerr operand together with other options and control statements.
Table 5-15 Relationship between the -e and dataerr operands
| -e option | dataerr operand in the option statement | pdload operation when an input data error is detected | Transaction settlement method | pdload's return code |
|---|---|---|---|---|
| Omitted | Omitted | Skips database storage processing for the erroneous data and resumes processing. | Commit | 4 |
| Omitted | Specified | Cancels processing when the erroneous data is detected. | Rollback | 8 |
| Specified | Omitted | Cancels processing when the erroneous data is detected. | Commit | 4 |
| Specified | Specified | Cancels processing when the erroneous data is detected. | Rollback | 8 |
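Table 5-15 can be read as a small decision function; the sketch below simply restates the table, and the returned tuple is our own representation, not pdload output.

```python
# Restatement of Table 5-15 as a lookup: given whether -e and dataerr are
# specified, return (operation on input data error, settlement, return code).
def dataerr_behavior(e_option: bool, dataerr: bool):
    if dataerr:
        return ("cancel processing", "rollback", 8)
    if e_option:
        return ("cancel processing", "commit", 4)
    return ("skip erroneous data and resume", "commit", 4)

for e_spec in (False, True):
    for d_spec in (False, True):
        print(e_spec, d_spec, dataerr_behavior(e_spec, d_spec))
```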
Table 5-16 Notes about specifying the dataerr operand together with other operands and control statements
| Option and control statement | Note about specifying together with the dataerr operand |
|---|---|
| -d | A control statement error occurs. |
| -l | Specify either a or p. |
| -i | Index unfinished status results after rollback if all of the following conditions are satisfied: (1) data is loaded in units of RDAREAs; (2) the number of RDAREAs storing non-partitioning key indexes does not match the number of table storage RDAREAs; (3) the value of the -i option is not s. Specifying this option together with the dataerr operand therefore results in a control statement error. Consider changing the pdload options, control statements, or index definition so that not all of these conditions are satisfied. |
| -i | If a key duplication error may occur during data loading, specify s. If any other value is specified, data stored in the table will not be rolled back even if a key duplication error is detected. |
| -k | If f, v, or c is specified, the unneeded LOB middle file remains after rollback. Before re-executing pdload, delete this LOB middle file. |
| -k | If f, v, or c is specified and an error is detected during data loading on a LOB column, the LOB column structure base table is committed and the LOB column is rolled back, regardless of the dataerr operand specification. In this case, correct the LOB data and re-execute data loading. |
| -e | Regardless of the option specification, processing is cancelled and then rolled back when erroneous data is detected. |
| Other option | None |
| job operand in the option statement | A control statement error occurs. |
| RDAREA name in the source statement | Index unfinished status results after rollback if all of the following conditions are satisfied: (1) data is loaded in units of RDAREAs; (2) the number of RDAREAs storing non-partitioning key indexes does not match the number of table storage RDAREAs; (3) the value of the -i option is not s. Specifying this statement together with the dataerr operand therefore results in a control statement error. Consider changing the pdload options, control statements, or index definition so that not all of these conditions are satisfied. |
| error operand in the source statement | Information about the detected error is output and then processing is rolled back. |
| errdata operand in the source statement | A control statement error occurs. |
| Other control statements | None |
Specifies that an input data error is to be detected when the input data in a DAT-format (including extended DAT format) input data file that is to be stored in any of the following data-type columns is longer than the defined column length:
- CHAR
- VARCHAR
- NCHAR
- NVARCHAR
- MCHAR
- MVARCHAR
- BINARY
Table 5-17 Relationships among input data length, defined column length, and lengover operand shows the relationships among the input data length, defined column length, and the lengover operand. Table 5-18 Notes about specifying the lengover operand together with other operands and control statements provides notes about specifying the lengover operand together with other options and control statements.
Table 5-17 Relationships among input data length, defined column length, and lengover operand
| Relationship between defined column length (m) and input data length (n) | lengover omitted | lengover specified |
|---|---|---|
| m = n | Stores the input data as is. | Stores the input data as is. |
| m < n | Discards the trailing end of the input data beginning at location m + 1 and then stores the remaining part of the input data. | Detects an input data error. |
| m > n | CHAR, NCHAR, and MCHAR: pads the trailing end of the input data with spaces beginning at location n + 1 and then stores the input data. VARCHAR, NVARCHAR, MVARCHAR, and BINARY: stores the input data as is. | CHAR, NCHAR, and MCHAR: pads the trailing end of the input data with spaces beginning at location n + 1 and then stores the input data. VARCHAR, NVARCHAR, MVARCHAR, and BINARY: stores the input data as is. |

- Legend:
- m: defined column length; n: length of the input data
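The following sketch restates the rules of Table 5-17 for a single DAT field. The function name and flags are illustrative; the fixed/varying distinction stands for CHAR/NCHAR/MCHAR versus VARCHAR/NVARCHAR/MVARCHAR/BINARY, and padding is shown with single-byte spaces for simplicity.

```python
# Restatement of Table 5-17: defined_len is the defined column length (m),
# len(data) is the input field length (n), and `lengover` mirrors whether
# the lengover operand is specified.
def adjust_dat_field(data: bytes, defined_len: int, fixed: bool,
                     lengover: bool):
    if len(data) > defined_len:                 # m < n
        if lengover:
            raise ValueError("input data error: length exceeds definition")
        return data[:defined_len]               # excess part is discarded
    if fixed:                                   # CHAR, NCHAR, MCHAR: m >= n
        return data.ljust(defined_len, b" ")    # pad with trailing spaces
    return data                                 # VARCHAR etc.: stored as is

print(adjust_dat_field(b"abcdef", 4, fixed=False, lengover=False))  # b'abcd'
print(adjust_dat_field(b"ab", 4, fixed=True, lengover=False))       # b'ab  '
```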
Table 5-18 Notes about specifying the lengover operand together with other operands and control statements
| Option and control statement | Note about specifying together with the lengover operand |
|---|---|
| -a, -b | A control statement error occurs because the lengover operand is applicable only to the DAT format. |
| Other option | None |
| cutdtmsg operand in the option statement | The KFPL31090-E message is displayed even when the cutdtmsg operand is not specified. |
| dataerr operand in the option statement | An input data error subject to rollback is detected. |
| extdat statement | Extended DAT-format input data files are also supported. |
| error operand in the source statement | The number of times the KFPL31090-W/E message is displayed changes as follows: when the lengover operand is specified, the message is displayed only for the first column in which an invalid data length is detected; when the lengover operand is omitted, the message is displayed for all columns in which an invalid data length is detected. |
| errdata operand in the source statement | The applicable data is output to the error data file. |
| Other control statements | None |
Specifies suppression of a type of error data information. When data loading is performed in units of RDAREAs on a table whose partitioning storage conditions have been changed, and the input data contains row data that does not satisfy the RDAREA storage conditions, the utility outputs error data information. This operand suppresses output of that error data information.
This operand is applicable if the table subject to data loading is a row-partitioned table and data loading is performed in units of RDAREAs. If this operand is specified under any other conditions, it is ignored.
The table below shows the relationship between the divermsg operand and each item when the input data contains row data that does not satisfy the partition storage conditions.
Table 5-19 Relationship between the divermsg operand and each item when input data contains row data that does not satisfy the partition storage conditions
| Item | divermsg operand specified | divermsg operand omitted |
|---|---|---|
| pdload's return code | 0# | 4 |
| Error information file | The KFPL31009-E message is not output. | The KFPL31009-E message is output. |
| Error data file | Error data is not output. | Error data is output. |
| -e option | Disabled | Enabled |
- #: In the event of a data format error, the return code is 4.
Specifies the endian that is used in the source input data file when a pdrorg-output binary file is used to transfer data between platforms that use different endians. The utility converts the input data file to the endian used at the target platform. If the source and target platforms use the same endian, there is no need to specify this operand.
- big: Input data file uses big endian.
- little: Input data file uses little endian.
The table below shows the operand specification when the source and target platforms use different endians.
Table 5-20 Operand specification when the source and target platforms use different endians
| Source | Target: big endian | Target: little endian |
|---|---|---|
| Big endian | --#1 | srcendian=big |
| Little endian | srcendian=little | --#2 |
- #1
- There is no need to specify the operand because the source and target use the same endian. However, srcendian=big can be specified.
- #2
- There is no need to specify the operand because the source and target use the same endian. However, srcendian=little can be specified.
- Rules
- When data is transferred between platforms that use different endians, the utility uses the binary file output by pdrorg as the input file. Therefore, if you specify the srcendian operand, you must also specify the -W and -b options or the -U option. If you omit the -W and -b options as well as the -U option, the srcendian operand cannot be specified; if it is specified, the KFPL25002-E or KFPL24991-E message will be output and the statement will terminate with an error.
- If -b -W -w is specified, the statement ignores the endian specification and uses the endian settings in the input file.
- Notes
- If the specified srcendian operand is invalid, the stored data is not guaranteed and the following might result:
- Input data error
- Abnormal termination of program
- Invalid value stored in the database
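To illustrate what an endian mismatch means for the fixed-width numeric fields in a binary input file, the sketch below byte-swaps a 32-bit integer and a 16-bit integer. The field layout is invented for the example and has nothing to do with the pdrorg-output binary format.

```python
# Illustration of the conversion the srcendian operand requests: integers
# written on a big-endian platform must be reinterpreted on a little-endian
# target. The record layout here is a made-up example.
import struct

record_from_big_endian_host = struct.pack(">ih", 123456, -7)  # INTEGER, SMALLINT

# Reading it without conversion on a little-endian target gives garbage,
# which is why an invalid srcendian specification is not guaranteed:
print(struct.unpack("<ih", record_from_big_endian_host))  # (1088553216, -1537)

# Converting as srcendian=big implies (source byte order declared correctly):
print(struct.unpack(">ih", record_from_big_endian_host))  # (123456, -7)
```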
Specifies that when an input data file in fixed-size data format is used for data loading, the character data to be stored in a numeric data type (INTEGER, SMALLINT, DECIMAL, FLOAT, or SMALLFLT) is to be converted to 0. The term character data refers to data specified in character format (type=char(n) or type=adec(n,m)) whose value consists of only spaces (0x20), tabs (0x09), and null characters (0x00).
This operand is applicable only to an input data file in fixed-size data format for which the -a option is specified. The operand is ignored if it is specified for an input data file in any other format.
The table below describes the allspace operand specification and the pdload processing when character data is stored in a numeric data type column.
Table 5-21 allspace operand specification and the pdload processing
| allspace operand | pdload processing |
|---|---|
| Omitted | Detects a data conversion error, outputs the KFPL31004-E message to the error information file, and skips database storage processing. |
| Specified | Stores 0 in the corresponding column in the database. |
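A compact way to express the rule in Table 5-21: a character-format field consisting only of spaces, tabs, and null characters is stored as 0 when allspace is specified, and is treated as a conversion error otherwise. The function below is only a sketch of that check, not pdload's converter.

```python
# Sketch of the allspace rule for a character-format numeric field
# (type=char(n) or type=adec(n,m)) in a fixed-size input file.
BLANK_BYTES = {0x20, 0x09, 0x00}        # space, tab, null character

def convert_numeric_field(field: bytes, allspace: bool):
    if field and all(b in BLANK_BYTES for b in field):
        if allspace:
            return 0                    # stored as 0 in the column
        raise ValueError("KFPL31004-E data conversion error (row skipped)")
    return int(field.replace(b"\x00", b"").strip())   # normal conversion (simplified)

print(convert_numeric_field(b"  \t \x00", allspace=True))   # 0
print(convert_numeric_field(b"  1234 ", allspace=False))    # 1234
```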
Specifies the handling of spaces in an XML document when data loading is performed from the XML document to an XML-type column.
- preserve: Preserve all spaces.
- strip: Handle spaces in a text node as follows (excluding a text node that is a descendant of an element with the xml:space="preserve" attribute):
- Remove spaces at the beginning and end of the text node.
- Replace any number of consecutive spaces with a single space.
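The strip behavior can be pictured as the following text-node normalization. The xml:space="preserve" ancestry check is reduced to a plain flag here, and the code is not tied to any particular XML parser.

```python
# Sketch of the `strip` behavior for one text node: trim leading/trailing
# spaces and collapse runs of spaces, unless an ancestor element carries
# xml:space="preserve" (modeled here as a boolean flag).
import re

def strip_text_node(text: str, preserve_ancestor: bool = False) -> str:
    if preserve_ancestor:
        return text                      # left untouched, as with `preserve`
    return re.sub(r" +", " ", text).strip(" ")

print(repr(strip_text_node("  a   b  ")))                          # 'a b'
print(repr(strip_text_node("  a   b  ", preserve_ancestor=True)))  # '  a   b  '
```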
(p) seq_range={buff|sequence-number-acquisition-unit|all}
Specifies the method for acquiring sequence numbers when the automatic numbering facility is used for data loading.
- buff
- Sequence numbers are to be acquired by the buffer unit acquisition method. This method acquires as many sequence numbers as the number of rows read into the input buffer and then performs data loading.
- You can change the size of the input buffer by specifying the file_buff_size operand in the option statement.
- For an input file in DAT or extended DAT format, sequence numbers are always acquired for all rows because the number of rows read into the input buffer is undefined. Therefore, if you load a large amount of data, specifying buff is not recommended.
- sequence-number-acquisition-unit ~<unsigned positive number>((1-2147483647))
- Sequence numbers are to be acquired by the specification-unit acquisition method; the specified value is the number of sequence numbers acquired at a time for data loading.
- all
- Sequence numbers are to be acquired by the number batch acquisition method.
If cycle is not specified in the sequence generator's cycle option, values outside the range of the sequence generator cannot be acquired. Therefore, if all is specified, an error results when the sequence numbers to be acquired exceed that range. If a value other than all is specified, an error results when a sequence number exceeds the range and acquisition of the sequence numbers for the specified acquisition unit fails.
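The difference between the acquisition methods comes down to how many sequence numbers are reserved from the sequence generator per request. The toy model below only shows the block sizes; cycling and the error behavior of the real sequence generator are not modeled, and all names are ours.

```python
# Toy model of how many sequence numbers are reserved per request for each
# acquisition method of the seq_range operand.
def reservations(total_rows, method, rows_per_buffer=40, unit=30):
    if method == "buff":        # one request per input-buffer fill
        return [min(rows_per_buffer, total_rows - i)
                for i in range(0, total_rows, rows_per_buffer)]
    if method == "all":         # a single request covering every input row
        return [total_rows]
    full, rest = divmod(total_rows, unit)   # seq_range=<unit>: fixed blocks
    return [unit] * full + ([rest] if rest else [])

print(reservations(100, "buff"))            # [40, 40, 20]
print(reservations(100, "unit", unit=30))   # [30, 30, 30, 10]
print(reservations(100, "all"))             # [100]
```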
(q) file_buff_size = input-buffer-size
~<unsigned integer>((32-512000)) (KB)
Specifies the memory size of the input buffer used when data is loaded from the input data file into the input buffer. You can specify this operand when you have specified seq_range=buff.
If this operand is omitted, 1,024 is assumed.
(r) charset={utf-16be|utf-16le}
Specifies the endian of the input data file when data is loaded from a UTF-16 input data file to a table defined in a UTF-8 environment. This operand is applicable when the input data file is in any of the following formats:
- Extended DAT format
- Fixed-size data format
- pdrorg-output binary format (pdload unload file in extended DAT format or fixed-size data format)
- utf-16be: Input data file uses big endian.
- utf-16le: Input data file uses little endian.
If this operand is omitted, the utility assumes that the input data file uses the UTF-8 character encoding.
When this operand is specified, pdload converts the character encoding in order to store UTF-16 data in columns for UTF-8. The following table lists the data types that result in character encoding conversion:
| No. | Data type | charset omitted | charset=utf-16be | charset=utf-16le |
|---|---|---|---|---|
| 1 | CHAR, VARCHAR (no character set specified) | N | YC | YC |
| 2 | CHAR, VARCHAR (EBCDIC character set) | YU | N | N |
| 3 | CHAR, VARCHAR (UTF-16 character set) | YC | N | YU |
| 4 | NCHAR, NVARCHAR | N | N | N |
| 5 | MCHAR, MVARCHAR | N | YC | YC |
| 6 | BINARY | N | N | N |
| 7 | BLOB (data) | N | N | N |
| 8 | BLOB (file name#) | N | YC | YC |
| 9 | Abstract data type: CHAR, VARCHAR | N | YC | YC |
| 10 | Abstract data type: NCHAR, NVARCHAR | N | N | N |
| 11 | Abstract data type: MCHAR, MVARCHAR | N | YC | YC |
| 12 | Abstract data type: BINARY (data) | N | N | N |
| 13 | Abstract data type: BINARY (file name#) | N | YC | YC |
| 14 | Abstract data type: BLOB (data) | N | N | N |
| 15 | Abstract data type: BLOB (file name#) | N | YC | YC |
| 16 | Abstract data type: other | N | N | N |
| 17 | Other | N | N | N |
- Legend:
- YC: Character encoding is converted and data lengths change in conversion.
- YU: Character encoding is converted and data lengths remain unchanged after conversion.
- N: Character encoding is not converted.
- #
- File names are converted to a character encoding that can be processed by the OS, regardless of the data types of the columns.
If the data lengths differ before and after conversion, pdload adjusts the data length. The table below shows the data types of columns and the data length adjustment method based on the data length resulting from conversion.
Table 5-22 Data types of columns and the data length adjustment method based on the data length resulting from conversion
| Data type of column | Data length after conversion ≤ column definition length | Data length after conversion > column definition length |
|---|---|---|
| CHAR or MCHAR | Data is left-aligned and padded with spaces. | Data is left-aligned and any excess part is discarded#. |
| VARCHAR or MVARCHAR | Data is stored with the length obtained after conversion. | Data is stored with the length obtained after conversion. |
- #
- If data truncation occurs, the subsequent processing depends on the lengover and cutdtmsg operand values specified in the option statement, as described in the table below:
| lengover operand | cutdtmsg operand | Processing in the event of data truncation |
|---|---|---|
| Omitted | Omitted or off | Processing continues as is. |
| Omitted | on | A warning message is output to the error information file and then processing is resumed. |
| err | -- | Processing terminates abnormally. |
- Legend: --: Not applicable
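As a final illustration of the charset conversion path, the sketch below converts one UTF-16 field to UTF-8 and then applies the length adjustment of Table 5-22, with the truncation case handled per the lengover/cutdtmsg footnote table above. The function name, the warning flag, and the byte-level truncation are our simplifications, not pdload's implementation.

```python
# Sketch of the charset={utf-16be|utf-16le} conversion followed by the
# length adjustment of Table 5-22. Truncation handling is reduced to an
# exception (lengover) or a warning flag (cutdtmsg=on).
def store_utf16_field(field: bytes, charset: str, defined_len: int,
                      varying: bool, lengover: bool, cutdtmsg: bool):
    encoding = {"utf-16be": "utf-16-be", "utf-16le": "utf-16-le"}[charset]
    data = field.decode(encoding).encode("utf-8")      # YC: length may change
    warning = None
    if len(data) > defined_len:                        # > column definition length
        if lengover:
            raise ValueError("input data error: length exceeds definition")
        if cutdtmsg:
            warning = "KFPL31090-W truncation"         # written to error info file
        data = data[:defined_len]                      # excess part is discarded
    elif not varying:                                  # CHAR or MCHAR
        data = data.ljust(defined_len, b" ")           # left-aligned, space padded
    return data, warning

field = "AB\u3042".encode("utf-16-be")                 # 'A', 'B', HIRAGANA A
print(store_utf16_field(field, "utf-16be", 8, varying=False,
                        lengover=False, cutdtmsg=True))
# (b'AB\xe3\x81\x82   ', None)
```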
All Rights Reserved. Copyright (C) 2011, 2015, Hitachi, Ltd.