OpenTP1 Version 7 TP1/Client User's Guide TP1/Client/J


2.11.7 Performance analysis trace

The performance analysis trace (PRF trace) is trace information collected by Cosminexus Application Server. By collecting the performance analysis trace, you can determine the flow of the series of processes that occurred during UAP execution and the time each took, and gather the information needed for performance analysis. Furthermore, when an error occurs, you can determine how far processing had progressed.

For an overview of performance analysis traces and details on how to use them, see the manuals Cosminexus Function Description and Cosminexus System Operation Guide. This section explains how to use performance analysis traces when you run TP1/Client/J on Cosminexus Application Server, the information collected in performance analysis traces, and trace collection points.

Organization of this subsection
(1) Outputting the performance analysis trace on Cosminexus Application Server
(2) Collating the Cosminexus Application Server trace with the OpenTP1 trace
(3) Information collected in the performance analysis trace
(4) Performance analysis trace collection point
(5) Performance analysis trace identification information that is output to a UAP trace

(1) Outputting the performance analysis trace on Cosminexus Application Server

When you run TP1/Client/J on Cosminexus Application Server, you can output the performance analysis trace on Cosminexus Application Server. The performance analysis trace is collected at the TP1/Client/J trace collection point. To collect the performance analysis trace, specify dccltprftrace=Y in the TP1/Client/J environment definition.

When the performance analysis trace is output on Cosminexus Application Server, the information collected by TP1/Client/J increases the size of the trace that is output. You must take this increase into consideration when specifying a size for the performance analysis trace.
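As a minimal sketch of the setting named above, the TP1/Client/J environment definition would contain the following line (the operand name and value Y are from this section; any other operands in a real definition are unaffected):

```
dccltprftrace=Y
```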

(2) Collating the Cosminexus Application Server trace with the OpenTP1 trace

When you issue to TP1/Server an RPC that uses the name service or the scheduler direct facility, you can add identification information that is essentially unique to each TP1/Client/J instance (an IP address, for example) to the RPC message sent to the scheduler. The added information is output to the OpenTP1 performance verification trace. By collating this OpenTP1 performance verification trace with the Cosminexus Application Server performance analysis trace, you can identify the flow of the series of operations between Cosminexus Application Server and OpenTP1. When you issue an RPC that uses the remote API facility to a TP1/Server whose version is 07-02 or later, you can likewise add identification information (an IP address, for example) to the RPC message sent to the scheduler.

To add identification information to the OpenTP1 performance verification trace, specify dccltprfinfosend=Y in the TP1/Client/J environment definition. For details about the OpenTP1 performance verification trace, see the manual OpenTP1 Description.
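As a minimal sketch of the setting named above, the TP1/Client/J environment definition would contain the following line (the operand name and value Y are from this section):

```
dccltprfinfosend=Y
```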

(3) Information collected in the performance analysis trace

Information collected by TP1/Client/J is output as added information to the performance analysis trace. The information collected in the performance analysis trace and a trace output example are described below.

Information collected in the performance analysis trace
Node ID: _Jaa (4-byte alphanumeric string)
aa: Two randomly chosen alphanumeric characters (0-9, A-Z, or a-z)
Root communication serial number: bbbbbbbb (4-byte hexadecimal number)
bbbbbbbb: IP address
RPC communication serial number: ccccdddd (4-byte hexadecimal number)
cccc: Randomly chosen 2-byte hexadecimal number
dddd: Communication serial number

Example of outputting performance analysis trace on Cosminexus Application Server
...    OPT                  ASCII
...    Hexadecimal display  _Jaa/0xbbbbbbbb/0xccccdddd...
 

Output example in which identification information is added to the performance verification trace of OpenTP1
[Figure]
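The identification format above can be decoded mechanically. The following sketch (a hypothetical helper, not part of the TP1/Client/J API) splits the string into its three fields and renders the root communication serial number as the IP address it encodes:

```java
// Hypothetical decoder for the PRF identification string described above
// (node ID / root communication serial number / RPC communication serial
// number). Not part of the TP1/Client/J API.
class PrfIdDecoder {
    static String decode(String prfId) {
        String[] parts = prfId.split("/");          // "_Jaa", "0xbbbbbbbb", "0xccccdddd"
        String nodeId = parts[0];                   // _Jaa: node ID
        long root = Long.parseLong(parts[1].substring(2), 16);
        // The root communication serial number holds the client's IP address.
        String ip = ((root >> 24) & 0xff) + "." + ((root >> 16) & 0xff) + "."
                  + ((root >> 8) & 0xff) + "." + (root & 0xff);
        long rpc = Long.parseLong(parts[2].substring(2), 16);
        long random = (rpc >> 16) & 0xffff;         // cccc: random 2-byte value
        long serial = rpc & 0xffff;                 // dddd: communication serial number
        return nodeId + " ip=" + ip + " serial=" + serial
             + " rand=0x" + Long.toHexString(random);
    }
}
```

For example, the string shown in the UAP trace in (5), _J0m/0x0ad10f7c/0x2b5d0001, decodes to node ID _J0m, IP address 10.209.15.124, and communication serial number 1.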

(4) Performance analysis trace collection point

This subsection explains the trace collection points for the TP1/Client/J performance analysis trace for each type of processing.

(a) Establishing and terminating connection with the RAP-processing listener

The following table shows the event IDs, trace collection points, additional information, and PRF trace collection levels used when establishing or terminating connection with the RAP-processing listener.

Table 2-45 Details of the trace collection points when establishing or terminating connection with the RAP-processing listener

Event ID | Number in the figure#1 | Trace collection point | Additional information#2 | Level
0x9180 | 1 | Before establishing connection to the RAP-processing listener | Information on the request destination (host name, port number) | A
0x9181 | 2 | After establishing connection to the RAP-processing listener (collected only when an error occurs) | Internal return code | A
0x9182#3 | 3 | After sending or receiving RAP-processing listener connection request data | Internal return code | A
0x9183 | 4 | Before sending or receiving RAP-processing listener disconnection request data | N/A | A
0x9184 | 5 | After sending or receiving RAP-processing listener disconnection request data | Internal return code | A

Legend:
A: Standard
N/A: Not applicable

#1
Corresponds to the numbers in Figure 2-33.

#2
Information to be added to the performance analysis trace of Cosminexus Application Server

#3
A trace with this event ID is not collected if an error occurs in the connection establishment process and the event ID 0x9181 is collected.

The following figure shows the trace collection points in establishing or terminating connection with the RAP-processing listener:

Figure 2-33 Trace collection points in establishing or terminating connection with the RAP-processing listener

[Figure]

(b) API surrogate execution request by the remote API facility

The following table shows the event IDs, trace collection points, additional information, and PRF trace collection levels when an API surrogate execution request is issued by the remote API facility.

Table 2-46 Details of the trace collection points when an API surrogate execution request is issued by the remote API facility

Event ID | Number in the figure#1 | Trace collection point | Additional information#2 | Level
0x9190 | 1 | Before processing a request for surrogate execution of dc_rpc_call | N/A | A
0x9191 | 2 | After processing a request for surrogate execution of dc_rpc_call | Internal return code | A
0x9192 | 1 | Before processing a request for surrogate execution of dc_trn_begin | N/A | A
0x9193 | 2 | After processing a request for surrogate execution of dc_trn_begin | Internal return code | A
0x9194 | 1 | Before processing a request for surrogate execution of dc_trn_chained_commit | N/A | A
0x9195 | 2 | After processing a request for surrogate execution of dc_trn_chained_commit | Internal return code | A
0x9196 | 1 | Before processing a request for surrogate execution of dc_trn_chained_rollback | N/A | A
0x9197 | 2 | After processing a request for surrogate execution of dc_trn_chained_rollback | Internal return code | A
0x9198 | 1 | Before processing a request for surrogate execution of dc_trn_unchained_commit | N/A | A
0x9199 | 2 | After processing a request for surrogate execution of dc_trn_unchained_commit | Internal return code | A
0x919A | 1 | Before processing a request for surrogate execution of dc_trn_unchained_rollback | N/A | A
0x919B | 2 | After processing a request for surrogate execution of dc_trn_unchained_rollback | Internal return code | A

Legend:
A: Standard
N/A: Not applicable

#1
Corresponds to the numbers in Figure 2-34.

#2
Information to be added to the performance analysis trace of Cosminexus Application Server

The following figure shows the trace collection points that are used when an API surrogate execution request is issued by the remote API facility:

Figure 2-34 Trace collection points that are used when an API surrogate execution request is issued by the remote API facility

[Figure]

(c) Issuing an RPC to the schedule server

The following table shows the event IDs, trace collection points, additional information, and PRF trace collection levels used when issuing an RPC to the schedule server:

Table 2-47 Details of the trace collection points when issuing an RPC to the schedule server

Event ID | Number in the figure#1 | Trace collection point | Additional information#2 | Level
0x91C0 | 1 | Before establishing connection to the schedule server | Information on the host at the service request destination (host name, port number) | A
0x91C1 | 2 | Before processing a service request to issue an RPC to the schedule server | Internal return code | A
0x91C2#3 | 3 | After accepting a connection request to receive response from the SPP or schedule server | Internal return code#4 | A
0x91C3 | 4 | Before establishing connection to the name server | Information on the host at the service request destination (host name, port number) | B
0x91C4 | 5 | After executing a service information inquiry to the name server | Internal return code | B
0x91C5#3 | 6 | After accepting a connection request to receive response data from the name server | Internal return code#5 | B

Legend:
A: Standard
B: Detail

#1
Corresponds to the numbers in Figures 2-35 and 2-36.

#2
Information to be added to the performance analysis trace of Cosminexus Application Server

#3
A trace with this event ID is collected only when transmission of the request data terminates normally.

#4
When the connection request to receive response data from the SPP or schedule server is accepted normally, the connection source information (IP address and port number) is also added.

#5
When the connection request to receive response data from the name server is accepted normally, the connection source information (IP address and port number) is also added.

The following figure shows the trace collection points that are used when issuing an RPC to the schedule server:

Figure 2-35 Trace collection points that are used when issuing an RPC to the schedule server

[Figure]

The following figure shows the trace collection points that are used when issuing a service information inquiry to the name server:

Figure 2-36 Trace collection points that are used when issuing a service information inquiry to the name server

[Figure]

(d) API execution

The following table shows the event IDs, trace collection points, additional information, and PRF trace collection levels used during API execution.

Table 2-48 Details of the trace collection points during API execution

Event ID | Trace collection point | Additional information#1 | Level
0x91D0 | Entry point to the openConnection method | N/A | A
0x91D1 | Exit point from the openConnection method | Internal return code | A
0x91D2 | Entry point to the closeConnection method | N/A | A
0x91D3 | Exit point from the closeConnection method | Internal return code | A
0x91D4 | Entry point to the rpcCall method | inlen | A
0x91D5 | Exit point from the rpcCall method | outlen, Internal return code | A
0x91D6 | Entry point to the rpcCallTo method | inlen | A
0x91D7 | Exit point from the rpcCallTo method | outlen, Internal return code | A
0x91D8 | Entry point to the trnBegin method | N/A | A
0x91D9 | Exit point from the trnBegin method | Internal return code#2 | A
0x91DA | Entry point to the trnChainedCommit method | N/A | A
0x91DB | Exit point from the trnChainedCommit method | Internal return code#2 | A
0x91DC | Entry point to the trnChainedRollback method | N/A | A
0x91DD | Exit point from the trnChainedRollback method | Internal return code#2 | A
0x91DE | Entry point to the trnUnchainedCommit method | N/A | A
0x91DF | Exit point from the trnUnchainedCommit method | Internal return code | A
0x91E0 | Entry point to the trnUnchainedRollback method | N/A | A
0x91E1 | Exit point from the trnUnchainedRollback method | Internal return code | A

Legend:
A: Standard
N/A: Not applicable

#1
Information to be added to the performance analysis trace of Cosminexus Application Server

#2
At the start of a transaction, the transaction global identifier and the transaction branch identifier are also added.
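The entry/exit pairing in Table 2-48 is regular: each entry point has an even event ID and the matching exit point is the next (odd) ID. A minimal, hypothetical lookup built from the table (not part of the TP1/Client/J API) can exploit this:

```java
import java.util.Map;

// Hypothetical lookup built from Table 2-48 (not part of the
// TP1/Client/J API): maps API-execution PRF event IDs to method names.
class ApiEventIds {
    // Entry-point event IDs (even); the matching exit point is ID + 1.
    static final Map<Integer, String> ENTRY = Map.of(
            0x91D0, "openConnection",
            0x91D2, "closeConnection",
            0x91D4, "rpcCall",
            0x91D6, "rpcCallTo",
            0x91D8, "trnBegin",
            0x91DA, "trnChainedCommit",
            0x91DC, "trnChainedRollback",
            0x91DE, "trnUnchainedCommit",
            0x91E0, "trnUnchainedRollback");

    // Describes an event ID as "<method> entry" or "<method> exit".
    static String describe(int eventId) {
        boolean exit = (eventId & 1) != 0;
        String name = ENTRY.get(exit ? eventId - 1 : eventId);
        return name == null ? "unknown" : name + (exit ? " exit" : " entry");
    }
}
```

Pairing entry and exit events this way is one way to measure the elapsed time of each API call when post-processing a collected trace.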

(5) Performance analysis trace identification information that is output to a UAP trace

TP1/Client/J outputs performance analysis trace identification information at the end of the exit-point information in the UAP trace, so that the performance analysis trace information of TP1/Server and Cosminexus can be correlated with the trace information of TP1/Client/J.

The following methods are provided for outputting performance analysis trace identification information to a UAP trace:

Note that the performance analysis trace identification information is output regardless of the value of the dccltprfinfosend operand in the TP1/Client/J environment definition.

An example of a UAP trace to which performance analysis trace identification information is output is shown below. The identification information is the trailing _J0m/0x0ad10f7c/0x2b5d0001 string at the end of the dump.

 
2008/09/01 16:37:24.698  Location = Out  ThreadName = main
MethodName = TP1Client.rpcCall
Exception  =
ADDRESS   +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +a +b +c +d +e +f  0123456789abcdef
00000000  00 00 00 1c 53 75 7a 75 6b 69 00 00 00 00 00 00  ....Suzuki......
00000010  00 00 00 00 00 00 00 00 00 00 00 00 51 00 00 00  ............Q...
00000020  5f 4a 30 6d 2f 30 78 30 61 64 31 30 66 37 63 2f  _J0m/0x0ad10f7c/
00000030  30 78 32 62 35 64 30 30 30 31                    0x2b5d0001
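As a minimal sketch (a hypothetical helper, not part of TP1/Client/J), the identification string can be pulled from the ASCII column of such a dump by taking the last substring that starts with "_J":

```java
// Hypothetical helper: extracts the PRF identification string from the
// tail of the ASCII column of a UAP trace dump, assuming it is the last
// substring that starts with "_J".
class UapTraceTail {
    static String prfId(String ascii) {
        int pos = ascii.lastIndexOf("_J");
        return pos < 0 ? null : ascii.substring(pos);
    }
}
```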