9.5.4 Log monitoring function
Log trapper is an add-on program that provides the function to monitor logging in your environment and issue notifications.
Log trapper uses the OSS Fluentd to convert information output to monitored log files and information output to the Windows event log into JP1 events. The converted JP1 events are registered in JP1/Base on the integrated manager host and can be viewed in JP1/IM - View and the integrated operation viewer.
- Organization of this subsection
(1) Function Overview
As one of the add-on programs in JP1/IM - Agent, log trapper provides the function to convert log messages written to text-format log files, and Windows event log events, into JP1 events. Log trapper uses the OSS Fluentd.
The following table lists the functions provided by log trapper. For details, see "(2) Input plug-in functions (Input Plugins)", "(3) Text-format log file monitoring facility (tail plug-in)", and "(4) Monitoring function of Windows event log (windows_eventlog2 plug-in)". Text in parentheses is the name of the function provided by the OSS Fluentd.
Function | Description
---|---
Input plug-in functions (Input Plugins) | Read and parse log messages output to text-format log files and events output to the Windows event log.
Text-format log file monitoring function (tail) | Reads and parses log messages.
Windows event log monitoring function (windows_eventlog2) | Reads and parses events in the Windows event log.
Event conversion functions (Filter Plugins) | Specify the conditions for the logs to be monitored and the properties to set when they are converted to JP1 events.
Log data editing function (record_transformer) | Specifies the properties and settings to apply when monitored logs are converted to JP1 events.
Log data extraction function (grep) | Specifies the conditions for the logs to be monitored.
Output plug-in functions (Output Plugins) | Output the log monitoring results. Output methods include conversion to JP1 events and output to Fluentd's own log file.
HTTP POST request function (http) | Notifies JP1/IM - Manager of log monitoring results so that they are converted to JP1 events.
Multi-output function (copy) | Outputs the log monitoring results through multiple output plug-in functions.
Stdout function (stdout) | Outputs the log monitoring results to Fluentd's stdout. Information output to stdout is written to the Fluentd log file.
Log output function | Outputs Fluentd operation information to the Fluentd log file.
Metric output function | Sends a metric to JP1/IM - Manager indicating that the log is being monitored.
When the input plug-in function detects a log, the event conversion function and the output plug-in function operate in sequence, and the log information to be monitored is converted into JP1 events. The settings for each function are made in definition files.
Input plug-in function settings are configured in the [Input Settings] section of the monitoring definition file. The monitoring definition files are the "text-formatted log file monitoring definition file", which monitors text-format log files, and the "Windows event-log monitoring definition file", which monitors event logs. For the specifications of each definition file, see the appropriate file in the manual "JP1/Integrated Management 3 - Manager Command, Definition File, and API Reference" (2. Definition File). A "text-formatted log file monitoring definition file" is created for each set of wrapped log files (or for each log file, if the log file is not wrapped).
The event conversion function is also set in the monitoring definition file. The log data editing function is set in the [Attributes Settings] section, where you can specify the properties and settings to be output when logs are converted to JP1 events. The log data extraction function is set in the [Inclusion Settings] and [Exclusion Settings] sections, where you specify the conditions for the logs to be monitored.
Output plug-in settings are configured in the log monitoring common definition file. The HTTP POST request function is set in the [Output Settings] section, where you specify the notification destination and the retry interval for conversion to JP1 events. For the log monitoring common definition file, see "Log monitoring common definition file (jpc_fluentd_common.conf)" (2. Definition File) in the manual "JP1/Integrated Management 3 - Manager Command, Definition File, and API Reference".
(2) Input plug-in functions (Input Plugins)
The input plug-in functions read log messages and event logs.
The following input plug-ins are available:
-
tail
Reads the log output to the log file of the application program.
-
windows_eventlog2 (Windows only)
Reads the log that is output to Windows event log.
(3) Text-format log file monitoring facility (tail plug-in)
tail is an input plug-in that monitors text-format log files.
By default, on the first startup, tail starts monitoring from the log entries added after startup. When the log is rotated, tail resumes reading the new file from the beginning. When the path of the monitored log file is specified in the path parameter of the conf file, the tail plug-in reads updated log entries as soon as the log file is updated.
(a) Log file path settings
The tail plug-in reads every file specified by an absolute path, regardless of the file's content. To read log messages from a text-format log file, you must specify the absolute path of the log file in the definition file. In Windows, you cannot specify directories or files on network drives.
You can specify wildcards in the path string. The usable wildcard is "*" (any string of zero or more characters). You can also specify multiple paths; to do so, separate them with ",".
If a path specified using a wildcard matches an unintended file, such as a binary file, the strings in that file are read. This causes unintended Fluentd log output or notification of unintended JP1 events. To avoid this problem, combine wildcards with literal strings so that they match only the names of the monitored log files.
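As a rough illustration (not Fluentd's actual matcher; the file names here are hypothetical), Python's fnmatch shows how a bare wildcard picks up unintended files, while a wildcard anchored with literal text matches only the intended log files:

```python
import fnmatch

# Hypothetical directory listing: two wrapped log files plus unrelated files.
files = ["funcAlog1.log", "funcAlog2.log", "funcA.dat", "core.bin"]

# A bare "*" matches everything, including the binary files.
broad = [f for f in files if fnmatch.fnmatch(f, "*")]

# Anchoring the wildcard with literal text matches only the log files.
narrow = [f for f in files if fnmatch.fnmatch(f, "funcAlog*.log")]

print(broad)   # all four files
print(narrow)  # only the two .log files
```

The same reasoning applies to the path parameter: the more literal text surrounds the wildcard, the smaller the chance of reading an unintended file.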
The following shows an example configuration.
-
When monitoring all log files stored in /path/to/a directory and /path/to/a2 directory
path /path/to/a/*, /path/to/a2/*
-
When monitoring all log files in the third tier from the start of the path, such as those under the /path/to1/a directory and those under the /path/to2/a2 directory (in this case, log files stored directly under the /path/to1 and /path/to2 directories are not monitored)
path /path/*/*
-
To monitor the wrapped log files (funcAlog1.log, funcAlog2.log, funcAlog3.log,...) stored in /path/to/b directory.
path /path/to/b/funcAlog*.log
-
When two or more log files are output to the same directory (/path/to/c)
-
fluentd_upB_tail.conf
path /path/to/c/funcBlog*
-
fluentd_upC_tail.conf
path /path/to/c/funcClog*
-
(b) Log files that can be monitored
You can monitor files to which entries are continually appended in a single log file, and files for which, when the log file reaches a certain size, a new log file with a different name is created and written to.
The following log file formats can be monitored:
-
Sequential files (SEQ)
A file to which entries are continually appended in a single log file, or a file for which, when the log file reaches a certain size, a new log file with a different name is created and written to.
-
Sequential file (SEQ2)
-
For Windows
A file that is renamed within the same volume, after which a file with the same name as before the rename is created and new log entries are written to it.
-
For Linux
A file that is renamed or deleted, after which a file with the same name as before the rename or deletion is created and new log entries are written to it.
-
-
Sequential file (SEQ3)
-
Windows only
A file that is deleted, after which a file with the same name as before the deletion is created and new log entries are written to it.
-
-
Wrap Around File (WRAP2)
A file in which, when the log file wraps around after reaching a certain size, the existing data is deleted and writing starts again from the beginning.
-
UPD Types of Log Files (UPD)
Used to monitor log files whose names contain an undetermined string, such as a date. Specify the undetermined part with a wildcard.
(c) Log files that cannot be monitored
Log files in a format that, when the file reaches a certain size, wraps around and overwrites the existing data from the beginning (without the data first being deleted) cannot be monitored.
(d) Character code of the log file that can be monitored
Character codes of log files that can be monitored are shown below.
-
UTF-8 (default) (Japanese / English)
-
UTF-16LE (Japanese / English)
-
UTF-16BE (Japanese / English)
-
Shift_JIS (Japanese)
-
Windows-31J (Japanese-language)
-
C (English)
-
GB18030 (Chinese)
When monitoring UTF-8 and C log files, do not specify from_encoding and encoding. To monitor log files in encodings other than UTF-8 and C, you must specify from_encoding and encoding: specify UTF-8 for encoding, and specify the character encoding of the monitored log file for from_encoding. If an incorrect character code is specified, the intended character string cannot be extracted, or garbled content that cannot be converted with the intended character code is notified as a JP1 event.
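For example, to monitor a Shift_JIS log file, the tail source might include the following (a sketch; the path, pos_file, and tag values are hypothetical):

```
<source>
  @type tail
  path /path/to/app/funcAlog*.log
  pos_file /path/to/pos/funcAlog.pos
  tag monitored.funcA
  from_encoding Shift_JIS
  encoding UTF-8
  <parse>
    @type none
  </parse>
</source>
```

Here from_encoding names the encoding of the file on disk, and encoding names the encoding after conversion, as described above.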
(e) Recording the latest file read position
tail records the latest read position in the monitored file in pos_file. You must create a pos_file for each monitoring definition. pos_file is created the first time tail starts. With the JP1/IM - Agent defaults, on the first log read, tail starts reading from the entries added after startup and creates pos_file to record the latest read position. While running, it reads updated entries and records the latest read position in pos_file.
On the second and subsequent startups, tail reads the updated portion starting from the latest read position recorded in pos_file.
If the contents of pos_file are corrupted while Fluentd is stopped, or if pos_file is deleted, tail reads from the end of the log file at the next startup and updates (or re-creates) pos_file to record the latest read position, in the same way as on the first startup. In this case, entries added to the monitored log file while Fluentd was stopped are not monitored.
(f) Reading existing logs at initial startup
With the JP1/IM - Agent defaults, tail starts the first log read from the end of the log file. To read the log entries that already exist at first startup, change the read_from_head parameter in the monitoring definition file to true (the default is false).
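A minimal sketch of the parameter in a tail source (the path, pos_file, and tag values are hypothetical):

```
<source>
  @type tail
  path /path/to/app/funcAlog*.log
  pos_file /path/to/pos/funcAlog.pos
  tag monitored.funcA
  read_from_head true
  <parse>
    @type none
  </parse>
</source>
```

With read_from_head true, the first startup reads the file from the beginning; subsequent startups still resume from the position recorded in pos_file.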
(g) Log parsing function (parse plug-in)
Parse plug-ins analyze the loaded logs and structure them in the specified format.
The following log formats (type) can be specified for the parse plug-in.
Type | Description
---|---
none (default) | Reads each log line as-is, without parsing or structuring.
regexp | Reads one line of log that matches the pattern specified as a regular expression.
multiline | Reads multiple lines of log that match the patterns specified as regular expressions.
syslog | Reads logs output by syslog.
csv | Reads logs in CSV (comma-separated values) format.
tsv | Reads logs in TSV (tab-separated values) format.
ltsv | Reads logs in LTSV (labeled tab-separated values) format.
-
none
By default, each newly added log line is read in its entirety, without parsing.
-
regexp
By specifying regexp for type, it is possible to parse and cut the log according to the format specified in the regular expression. Lines of the log that did not match the pattern specified in the regular expression are not read and a warning message is printed in Fluentd log. Refer to Text-formatted log file monitoring definition file (fluentd_@@trapname@@_tail.conf.template)" (2. Definition file) in JP1/Integrated Management 3-Manager Command, Definition File and API Reference manual for information about the warning messages that are output.
When you cut out a log by the name of "time", you can use timezone to specify the time zone when the log was output. For more information, see Text-formatted log file monitoring definition file (fluentd_@@trapname@@_tail.conf.template)" (2. Definition file) in JP1/Integrated Management 3-Manager Command, Definition File and API Reference manual.
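The named-capture mechanism that the regexp parser relies on can be illustrated in Python (the log line and pattern are hypothetical; Fluentd itself uses Ruby regular expressions, whose named-capture syntax (?&lt;name&gt;...) is equivalent to Python's (?P&lt;name&gt;...)):

```python
import re

# Hypothetical log line and a pattern with named captures, mirroring
# what the regexp parse plug-in does when cutting a line into fields.
line = "2022-12-31 12:34:56 [main] INFO Main - Start"
pattern = re.compile(
    r"^(?P<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}) "
    r"\[(?P<thread>.*)\] (?P<level>\S+) (?P<message>.*)$"
)

match = pattern.match(line)
record = match.groupdict()  # each named group becomes a field

print(record["time"])     # 2022-12-31 12:34:56
print(record["level"])    # INFO
print(record["message"])  # Main - Start
```

A line that does not match the pattern yields no record, which corresponds to the warning behavior described above.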
-
multiline
The multiline parse plug-in reads and parses multi-line logs. In [Input Settings] of the text-formatted log file monitoring definition file, if you set the format_firstline and formatN parameters as regular expressions, you can use named captures to structure multi-line logs. For details about the parameters to set, see "Text-formatted log file monitoring definition file (fluentd_@@trapname@@_tail.conf.template)" (2. Definition file) in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
The following shows a sample configuration for reading a Java stack trace log.
-
Definition file
<parse>
  @type multiline
  format_firstline /\d{4}-\d{1,2}-\d{1,2}/
  format1 /^(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}) \[(?<thread>.*)\] (?<level>[^\s]+)(?<message>.*)/
</parse>
-
Log to be added to the monitored log file
2013-3-03 14:27:33 [main] INFO Main - Start
2013-3-03 14:27:33 [main] ERROR Main - Exception
javax.management.RuntimeErrorException: null
 at Main.main(Main.java:16) ~[bin/:na]
2013-3-03 14:27:33 [main] INFO Main - End
-
Result of parsing the log
time: 2013-03-03 14:27:33 +0900
record: { "thread":"main", "level":"INFO", "message":" Main - Start" }
time: 2013-03-03 14:27:33 +0900
record: { "thread":"main", "level":"ERROR", "message":" Main - Exception\njavax.management.RuntimeErrorException: null\n at Main.main(Main.java:16) ~[bin/:na]" }
time: 2013-03-03 14:27:33 +0900
record: { "thread":"main", "level":"INFO", "message":" Main - End" }
The following is a sample configuration for reading Rails logging.
-
Definition file
<parse>
  @type multiline
  format_firstline /^Started/
  format1 /Started (?<method>[^ ]+) "(?<path>[^"]+)" for (?<host>[^ ]+) at (?<time>[^ ]+ [^ ]+ [^ ]+)\n/
  format2 /Processing by (?<controller>[^\u0023]+)\u0023(?<controller_method>[^ ]+) as (?<format>[^ ]+?)\n/
  format3 /( Parameters: (?<parameters>[^ ]+)\n)?/
  format4 / Rendered (?<template>[^ ]+) within (?<layout>.+) \([\d\.]+ms\)\n/
  format5 /Completed (?<code>[^ ]+) [^ ]+ in (?<runtime>[\d\.]+)ms \(Views: (?<view_runtime>[\d\.]+)ms \| ActiveRecord: (?<ar_runtime>[\d\.]+)ms\)/
</parse>
-
Log to be added to the monitored log file
Started GET "/users/123/" for 127.0.0.1 at 2013-06-14 12:00:11 +0900
Processing by UsersController#show as HTML
 Parameters: {"user_id"=>"123"}
 Rendered users/show.html.erb within layouts/application (0.3ms)
Completed 200 OK in 4ms (Views: 3.2ms | ActiveRecord: 0.0ms)
-
Result of parsing the log
time: 1371178811 (2013-06-14 12:00:11 +0900)
record: { "method":"GET", "path":"/users/123/", "host":"127.0.0.1", "controller":"UsersController", "controller_method":"show", "format":"HTML", "parameters":"{ \"user_id\":\"123\"}", ... }
-
-
syslog
The syslog parse plug-in analyzes logs output by syslog. For details about the parameters to set, see "Text-formatted log file monitoring definition file (fluentd_@@trapname@@_tail.conf.template)" (2. Definition file) in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
The following shows a sample setting for reading RFC-3164 formatted logs.
-
Definition file
<parse>
  @type syslog
  time_format %b %d %H:%M:%S
  message_format rfc3164
  with_priority false
  parser_type string
  support_colonless_ident true
</parse>
-
Log to be added to the monitored log file
<6>Feb 28 12:00:00 192.168.0.1 fluentd[11111]: [error] Syslog test
-
Result of parsing the log
time: 1362020400 (Feb 28 12:00:00)
record: { "pri": 6, "host": "192.168.0.1", "ident": "fluentd", "pid": "11111", "message": "[error] Syslog test" }
The following shows a sample setting for reading RFC-5424 formatted logs.
-
Definition file
<parse>
  @type syslog
  time_format %b %d %H:%M:%S
  rfc5424_time_format %Y-%m-%dT%H:%M:%S.%L%z
  message_format rfc5424
  with_priority false
  parser_type string
  support_colonless_ident true
</parse>
-
Log to be added to the monitored log file
<16>1 2013-02-28T12:00:00.003Z 192.168.0.1 fluentd 11111 ID24224 [exampleSDID@20224 iut="3" eventSource="Application" eventID="11211"] Hi, from Fluentd!
-
Result of parsing the log
time: 1362052800 (2013-02-28T12:00:00.003Z)
record: { "pri": 16, "host": "192.168.0.1", "ident": "fluentd", "pid": "11111", "msgid": "ID24224", "extradata": "[exampleSDID@20224 iut=\"3\" eventSource=\"Application\" eventID=\"11211\"]", "message": "Hi, from Fluentd!" }
-
-
csv
The csv parse plug-in parses logs in CSV format. For details about the parameters to set, see "Text-formatted log file monitoring definition file (fluentd_@@trapname@@_tail.conf.template)" (2. Definition file) in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
The following is a sample configuration for reading CSV-format logs.
-
Definition file
<parse>
  @type csv
  keys time,host,req_id,user
  time_key time
</parse>
-
Log to be added to the monitored log file
2013/02/28 12:00:00,192.168.0.1,111,-
-
Result of parsing the log
time: 1362020400 (2013/02/28 12:00:00)
record: { "host": "192.168.0.1", "req_id": "111", "user": "-" }
-
-
tsv
The tsv parse plug-in parses logs in TSV format. For details about the parameters to set, see "Text-formatted log file monitoring definition file (fluentd_@@trapname@@_tail.conf.template)" (2. Definition file) in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
The following is a sample configuration for reading TSV-format logs.
-
Definition file
<parse>
  @type tsv
  keys time,host,req_id,user
  time_key time
</parse>
-
Log to be added to the monitored log file
2013/02/28 12:00:00\t192.168.0.1\t111\t-
-
Result of parsing the log
time: 1362020400 (2013/02/28 12:00:00)
record: { "host": "192.168.0.1", "req_id": "111", "user": "-" }
-
-
ltsv
The ltsv parse plug-in parses logs in LTSV format. For details about the parameters to set, see "Text-formatted log file monitoring definition file (fluentd_@@trapname@@_tail.conf.template)" (2. Definition file) in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
The following is a sample configuration for reading LTSV-format logs.
-
Definition file
<parse>
  @type ltsv
  keys time,host,req_id,user
  time_key time
</parse>
-
Log to be added to the monitored log file
time:2013/02/28 12:00:00\thost:192.168.0.1\treq_id:111\tuser:-
-
Result of parsing the log
time: 1362020400 (2013/02/28 12:00:00)
record: { "host": "192.168.0.1", "req_id": "111", "user": "-" }
The following is a sample configuration for using the delimiter_pattern.
-
Definition file
<parse>
  @type ltsv
  delimiter_pattern /\s+/
  label_delimiter =
</parse>
-
Log to be added to the monitored log file
timestamp=1362020400 host=192.168.0.1 req_id=111 user=-
-
Result of parsing the log
record: { "timestamp": "1362020400", "host" : "192.168.0.1", "req_id" : "111", "user" : "-" }
-
(4) Monitoring function of Windows event log (windows_eventlog2 plug-in)
windows_eventlog2 is an input plug-in that reads events from the Windows event log.
(a) Types of logs that can be monitored
The log type is the name of the log displayed in Windows [Event Viewer]. If you specify a log type in the channel parameter of the conf file, you can monitor the logs of that type. windows_eventlog2 can read all channels except debug logs and analytics logs.
The following is an example of the types of logs that can be specified:
-
Windows event log
-
Application
-
System
-
Setup
-
Security
-
-
Application and Service Logs
HardwareEvents, etc.
Use the following procedure to check the types of logs that can be monitored.
-
Execute the wevtutil command at the command prompt to check the list of log types registered in the system.
The following is an example of entering the command:
>wevtutil el
-
For each log type confirmed in step 1, confirm its enabled/disabled setting and its type.
The following is an example of entering the command:
>wevtutil gl Application
name: Application
enabled: true
type: Admin
:
You can specify a log type in the channel parameter only if all of the following are true:
-
Enabled is "true"
-
Type is "Admin" or "Operational"
-
(b) Log files that can be monitored
You can monitor Windows event log for supported language environments. For details about the supported language environments, see 9.3.2 Language settings.
(c) Recording the most recent read position of the event log (storage plug-in)
windows_eventlog2 uses the storage plug-in to record the latest read position of the log in local storage in JSON format.
(d) Notes
If no event is added to the monitored Windows event log between the first start of Fluentd and the time it stops, the Windows event log entries added between that stop and the next start cannot be monitored. In this case, a warning message is output at the next startup. An example of the warning message is as follows:
[warn]: #0 This stored bookmark is incomplete for using. Referring `read_existing_events` parameter to subscribe: <BookmarkList> </BookmarkList>, channel: (Name of the channel)
If no event log entries are added and Fluentd is stopped again, the event log entries added during that stop cannot be monitored either, and a similar warning message is output the next time Fluentd starts.
Similarly, suppose you add a new Windows event log monitoring definition to Fluentd, or change the contents of a monitoring definition file, so that monitoring starts for a channel whose latest read position has not been saved. If no event log entries were added to the monitored Windows event log before the stop, the entries added between the stop and the next start are not monitored, and a similar warning message is output.
Once a new event log entry is added after the next startup, that entry is monitored and subsequent event monitoring works without any problems.
(5) Event converter (Filter Plugins)
The event conversion functions can edit the loaded log and extract specific entries. The event conversion functions are as follows:
-
record_transformer
Edit the loaded log.
-
grep
Extracts a specific log from the loaded log.
(6) Log data editing function (record transformer plug-in)
record_transformer edits the log data read from log files.
It edits the log data read from the log file and also edits the labels of the metrics to be remotely written. Metric labels can be specified in the [Metric Settings] section, and JP1 event attributes can be specified in the [Attributes Settings] section. For the default values, see "Text-formatted log file monitoring definition file (fluentd_@@trapname@@_tail.conf.template)" (2. Definition file) in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
Users can edit the parameters of record_transformer in the [Metric Settings] and [Attributes Settings] sections of the definition file.
The [Metric Settings] section specifies the labels of metrics. The user can specify the following parameters:
-
Specifying a host name
In the jp1_pc_fluentd_hostname field in the <record> directive, specify the monitored host name.
-
Specifying label names for IM management nodes
In the jp1_pc_nodelabel field in the <record> directive, specify the label that the integrated operation viewer displays for the IM management node, using characters other than control characters.
-
Specifying the category ID
In the jp1_pc_category field in the <record> directive, specify the category ID of the IM management node for the log's SID.
-
Specifying the log monitoring name
In the jp1_pc_logtrap_defname field in the <record> directive, specify the log monitoring name given to the copy of the template monitoring definition file.
In the [Attributes Settings] section, the fields in the log can be edited according to JP1/IM - Manager's API request format so that the log data is sent to JP1/IM - Manager and converted to JP1 events.
The user can specify the following parameters:
-
Specifying the JP1 event ID
In the ID field in the <record> directive, specify the B.ID attribute of the JP1 event.
-
Specifying a host name
In the SOURCE_HOST field in the <record> directive, specify the monitored host name.
-
Specifying the severity
In the SEVERITY field in the <record> directive, specify the E.SEVERITY attribute of the JP1 event.
-
Specifying label names for IM management nodes
In the JPC_NODELABEL field in the <record> directive, specify the label that the integrated operation viewer displays for the IM management node, using characters other than control characters.
-
Adding arbitrary extended attributes
You can also add arbitrary extended attributes. Set NEW_FIELD (attribute name) and NEW_VALUE (attribute value) in the <record> directive to send any attribute and value to JP1/IM - Manager.
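Putting the parameters above together, an [Attributes Settings]-style fragment might look like the following (a sketch; the tag and all attribute values are hypothetical):

```
<filter monitored.funcA>
  @type record_transformer
  <record>
    ID 00000001
    SEVERITY Error
    SOURCE_HOST hostA
    JPC_NODELABEL AppA-log
    NEW_FIELD COMPONENT
    NEW_VALUE funcA
  </record>
</filter>
```

Each line inside the <record> directive sets one field of the record, which the later output plug-in sends to JP1/IM - Manager.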
(7) Log data extractor (grep plug-in)
grep extracts, from the loaded log, the log data that matches a specified pattern. You can use it to select which logs are sent to JP1/IM - Manager and converted into JP1 events.
(a) Extraction condition specification
Specify extraction conditions as regular expressions. Within the <regexp> directive, you set a regular-expression condition by specifying two parameters:
-
key
The name of the field to be matched against the regular expression pattern
-
pattern
Regular expression to extract
You can create multiple extraction conditions and combine them with logical AND and logical OR, as shown below.
-
Specifying logical AND (AND) criteria
If you want to extract logs that satisfy all of the regular expression patterns, place the <regexp> directives inside an <and> directive. The key of each <regexp> directive inside the <and> directive must be unique. If a key is duplicated, an error is issued at Fluentd startup and Fluentd does not start.
-
Specifying logical OR (OR) criteria
If you want to extract logs that match at least one of several regular expression patterns, place the <regexp> directives inside an <or> directive. The key of each <regexp> directive inside the <or> directive must be unique. If a key is duplicated, an error is issued at Fluentd startup and Fluentd does not start.
(b) Specifying exclusion conditions
Exclusion conditions are specified as regular expressions. You set a regular-expression condition by specifying the following two parameters in the <exclude> directive:
-
key
The name of the field to be matched against the regular expression pattern
-
pattern
Regular expressions to exclude
You can create more than one exclusion condition and combine them with logical AND and logical OR, as shown below.
-
Specifying logical AND (AND) criteria
If you want to exclude logs that satisfy all of the regular expression patterns, place the <exclude> directives inside an <and> directive. The key of each <exclude> directive inside the <and> directive must be unique. If a key is duplicated, an error is issued at Fluentd startup and Fluentd does not start.
-
Specifying logical OR (OR) criteria
If you want to exclude logs that match at least one of several regular expression patterns, place the <exclude> directives inside an <or> directive. The key of each <exclude> directive inside the <or> directive must be unique. If a key is duplicated, an error is issued at Fluentd startup and Fluentd does not start.
(c) Setting example
You can set log extraction conditions or exclusion conditions using directives enclosed in < and > (DIRECTIVE in the following configuration rules).
"::=" indicates that the left-hand side is defined as the right-hand side. "|" indicates a choice between the items before and after the symbol.
CR ::= newline
KEY ::= key text
PATTERN ::= pattern /regular expressions/
DIRECTIVE ::= REGEXP | EXCLUDE | AND | OR
REGEXP ::= <regexp> KEY CR PATTERN </regexp>
EXCLUDE ::= <exclude> KEY CR PATTERN </exclude>
EXPRESSION ::= REGEXP | EXCLUDE
EXPRESSIONS ::= EXPRESSION EXPRESSION | EXPRESSIONS EXPRESSION
AND ::= <and> EXPRESSIONS </and>
OR ::= <or> EXPRESSIONS </or>
The total number of <regexp> and <exclude> directives that can be specified in the <and> and <or> directives of one log monitoring definition is 256.
The following shows an example configuration.
-
When extracting logs whose MESSAGE field contains the string "Error":
@typeΔgrep
<regexp>
  key MESSAGE
  pattern /Error/
</regexp>
-
When excluding logs whose MESSAGE field contains the string "Info":
@typeΔgrep
<exclude>
  key MESSAGE
  pattern /Info/
</exclude>
-
When extracting logs whose VALUE field is numeric and whose RECORD_NAME field starts with cpu_:
@typeΔgrep
<and>
  <regexp>
    key VALUE
    pattern /[1-9]\d*/
  </regexp>
  <regexp>
    key RECORD_NAME
    pattern /^cpu_/
  </regexp>
</and>
-
When excluding logs whose STATUS_CODE field is a three-digit number starting with 5 or whose URL field ends with .css:
@typeΔgrep
<or>
  <exclude>
    key STATUS_CODE
    pattern /^5\d\d$/
  </exclude>
  <exclude>
    key URL
    pattern /\.css$/
  </exclude>
</or>
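The AND/OR evaluation described above can be sketched in Python (the field names and records are hypothetical; this mimics, rather than reproduces, the grep plug-in):

```python
import re

# Hypothetical records already parsed into fields, using the AND example's
# two conditions: VALUE is numeric and RECORD_NAME starts with "cpu_".
records = [
    {"VALUE": "42",  "RECORD_NAME": "cpu_user"},
    {"VALUE": "42",  "RECORD_NAME": "mem_free"},
    {"VALUE": "n/a", "RECORD_NAME": "cpu_idle"},
]

conditions = [("VALUE", re.compile(r"[1-9]\d*")),
              ("RECORD_NAME", re.compile(r"^cpu_"))]

# <and>: keep a record only if every pattern matches its field.
kept_and = [r for r in records
            if all(p.search(r[k]) for k, p in conditions)]

# <or>: keep a record if at least one pattern matches its field.
kept_or = [r for r in records
           if any(p.search(r[k]) for k, p in conditions)]

print(len(kept_and))  # 1
print(len(kept_or))   # 3
```

Only the first record satisfies both conditions, while each record satisfies at least one, which matches the AND/OR semantics of the <and> and <or> directives.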
(8) Output Plug-in function (Output Plugins)
The output plug-ins output the log monitoring results. The output plug-ins have the following functions:
-
http
Calls the JP1 event conversion API of JP1/IM - Manager.
-
stdout
Outputs to Fluentd's own log file.
-
copy
A plug-in that allows you to use more than one of the above output methods.
-
rewrite_tag_filter
A plug-in that specifies the criteria for issuing JP1 events.
(9) HTTP POST request function (http plug-in)
The http plug-in uses the POST method to forward metrics and log data in JSON format.
(a) Sending a metric
You can POST metrics by specifying the URL of the trend data write API in endpoint. POSTed metrics can be viewed as trend data in the integrated operation viewer. You can also POST metrics to create an IM management node.
You must specify the trend data write API.
The initial value is as follows:
endpoint http://integrated-agent-host-name:port/ima/api/v1/proxy/promscale/write
(b) Sending log data
You can POST log data by specifying the URL of the JP1 event conversion API in endpoint. POSTed log data can be viewed as JP1 events in the integrated operation viewer.
You must specify the JP1 event conversion API.
The initial value is as follows:
endpoint http://integrated-agent-host-name:port/ima/api/v1/proxy/imdd/im/api/v1/events/transform
(c) Buffer function for log data to be sent (buffer)
Fluentd uses the buffering function when using output plug-ins. The buffer is where acquired log data is stored. By using the buffer function, you can adjust the frequency of data transmission and temporarily store data that failed to be transmitted. The http plug-in accumulates the received log data in a buffer, and the accumulated log data is POSTed to the JP1/IM agent management base at regular intervals.
-
Buffer file path setting
You must specify the absolute path of the directory that contains the buffer. The path parameter in the <buffer> directive of the definition file specifies the file that caches the buffer.
-
Setting the transfer interval for accumulated log data
If you set the flush_interval, you can specify how often the accumulated log data is transferred.
-
Configuring the behavior of the output plug-in when the buffer queue is full
By setting overflow_action, you can choose one of three behaviors (throw_exception, block, or drop_oldest_chunk) to apply when the total size of the buffer reaches the size specified in total_limit_size. For details about each behavior, see "Text-formatted log file monitoring definition file (fluentd_@@trapname@@_tail.conf.template)" (2. Definition file) in the manual JP1/Integrated Management 3 - Manager Command, Definition File and API Reference.
-
Setting the retry interval
Fluentd accesses the JP1/IM agent control base and the JP1/IM agent management base to connect to JP1/IM - Manager. If the JP1/IM agent control base, the JP1/IM agent management base, or JP1/IM - Manager is down, connection to JP1/IM - Manager may fail.
By setting retry_wait, you can specify how often to retry when a connection fails due to a temporary communication failure.
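A <buffer> fragment combining the parameters above might look like this (a sketch; the path and all values are hypothetical):

```
<buffer>
  @type file
  path /path/to/buffer
  flush_interval 60s
  total_limit_size 64MB
  overflow_action drop_oldest_chunk
  retry_wait 30s
</buffer>
```

Here flush_interval controls how often accumulated data is sent, overflow_action takes effect when the buffer reaches total_limit_size, and retry_wait controls the interval between retries after a failed transmission.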
(10) Stdout function (stdout plug-in)
The stdout plug-in outputs log data and metrics to stdout after they have been edited and extracted by the event conversion functions.
(a) Logging formatting function (format)
You can format the log and output it to a log file.
The output is as follows.
time<delimiter>tag<delimiter>record<newline>
-
Delimiter
The delimiter is \t (tab).
-
Line feed
Line feeds are LF (for non-Windows) or CRLF (for Windows).
-
time
The date and time are output in the following format:
%Y-%m-%d %H:%M:%S %z
For example, "2022-12-31 12:34:00 +0900" is output for 12:34:00 on December 31, 2022 (Japan time).
When the date and time in the log message are parsed and recorded under the name "time" by the log parsing function described in "(3) Text-format log file monitoring facility (tail plug-in)", "(g) Log parsing function (parse plug-in)", that date and time are output. If "time" is not recorded, the time of the host OS at the moment Fluentd monitored the log message is set.
If the time of the host OS is changed, subsequent "time" values use the new time. If the time zone is changed while Fluentd is stopped, the new time zone is used in the output.
-
tag
Outputs the tag specified in the <source> directive.
For a log that issued a JP1 event, the tag is output as "value of the specified tag.jp1event". For a log that did not issue a JP1 event, the tag is output as "value of the specified tag.outputlog". To specify whether to issue JP1 events or to output log data to a log file without issuing JP1 events, use "(12) Event forwarding function (rewrite_tag_filter plug-in)".
-
record
The log data that was parsed in the [Input Settings] section and edited in the [Attributes Settings] section is output in the following JSON format.
{ "eventid" : "Event ID", "message" : "Message", "xsystem" : true, "attrs" : {"Extended attribute name" : "Value",...} }
The actual output contains no line breaks.
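As an illustration, a single formatted stdout line combining time, tag, and record might look like the following (tab-delimited; the event ID, tag, and message are hypothetical values, not from this manual):

```
2022-12-31 12:34:00 +0900	tail.application.jp1event	{"eventid":"00004000","message":"sample error message","xsystem":true,"attrs":{}}
```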
(11) Multiple output functions (copy plug-Ins)
The copy plug-in allows log data to be output by more than one output plug-in.
Use [Output Settings] to specify the output plug-ins that output the log monitoring results. An output plug-in specifies the log monitoring target in the argument of the <match> directive and defines its settings inside the <match> directive.
JP1/IM - Agent uses the output plug-ins described in "(9) HTTP POST request function (http plug-in)" and "(10) Stdout function (stdout plug-in)", but only a single output plug-in can be defined within a <match> directive. Therefore, by placing the copy plug-in in the <match> directive and defining the other output plug-ins inside the copy plug-in, you can have several plug-ins output the log monitoring results.
(a) Copy-destination specification function (store)
If you specified the copy plug-in in [Output Settings], specify the output plug-ins (copy destinations) to be used in <store> directives. The <store> directive can be specified more than once, allowing more than one plug-in to output the same log data.
The content of a <store> directive is the same as what you would specify in the <match> directive when using a single output plug-in.
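For example, a <match> directive that uses the copy plug-in to both POST JP1 events via the http plug-in and print them via the stdout plug-in could be sketched as follows (the match pattern and endpoint host/port are illustrative placeholders):

```
<match tail.**>
  @type copy
  <store>
    # First destination: POST the log data to the JP1/IM agent management base
    @type http
    endpoint http://agent-host:port/ima/api/v1/proxy/imdd/im/api/v1/events/transform
  </store>
  <store>
    # Second destination: also write the same log data to standard output
    @type stdout
  </store>
</match>
```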
(12) Event forwarding function (rewrite_tag_filter plug-In)
You can specify the log data to be forwarded to JP1/IM - Manager as JP1 events by referencing the JP1 event attribute values edited in "(6) Log data editing function (record_transformer plug-in)". This setting is specified in [Forward Settings] in the monitor definition file. Specify the JP1 event property in the key parameter of the <rule> directive, and the regular expression for the logs that issue JP1 events in the pattern parameter.
Log data that matches the regular expression is POSTed to JP1/IM - Manager as a JP1 event, and is also output to standard output by "(10) Stdout function (stdout plug-in)". Log data that does not match is not POSTed to JP1/IM - Manager and is only output to stdout.
For more information, see the [Forward Settings] section of "Text-formatted log file monitoring definition file (fluentd_@@trapname@@_tail.conf.template)" (2. Definition file) in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference manual.
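A minimal [Forward Settings] sketch using rewrite_tag_filter: logs whose monitored field matches the pattern are retagged with the .jp1event suffix (and so are issued as JP1 events), and all other logs get the .outputlog suffix. The tag, key name, and patterns below are illustrative assumptions:

```
<match tail.application>
  @type rewrite_tag_filter
  <rule>
    # Logs whose MESSAGE field matches this pattern are issued as JP1 events
    key MESSAGE
    pattern /ERROR/
    tag ${tag}.jp1event
  </rule>
  <rule>
    # All other logs are only output to the log file, without issuing JP1 events
    key MESSAGE
    pattern /.*/
    tag ${tag}.outputlog
  </rule>
</match>
```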
(13) Log output function
Fluentd has a log output function. Logs are output to a log file by WinSW (for Windows) or rotatelogs (for non-Windows). For details on settings other than the log level, see "Service definition file (jpc_program_service.xml)" and "Unit definition file (jpc_program name.service)" in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference manual (2. Definition File).
For details about Fluentd log files, see "12.2.2 (5) Fluentd Log" in JP1/Integrated Management 3 - Manager Operation Manual.
(a) Specifying the log level
In the <system> section, you can specify the log level by setting log_level. The log level applies to the entire definition file. The log levels that can be specified are listed below in descending order of severity.
-
fatal
-
error
-
warn
-
info
-
debug
-
trace
The default log level is info, which outputs logs with a severity of info or higher: info, warn, error, and fatal.
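For example, to raise the log output to the debug level for the entire definition file, the <system> section would be set as follows (a sketch; debug is one of the levels listed above):

```
<system>
  # Output debug, info, warn, error, and fatal logs
  log_level debug
</system>
```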
(14) Metric send function
For each monitor definition file, a metric indicating that log monitoring is running is sent to JP1/IM - Manager every 60 seconds. The metric is sent by using "(9) HTTP POST request function (http plug-in)".
By storing the metric in JP1/IM - Manager, you can use the trend view feature to verify that log monitoring is working. In addition, the IM management node creation function of the product plugin creates an IM management node by referencing the metric.
The metric to be sent is shown below. The values written in bold are set by the user in the [Metric Settings] section of the monitor definition file.
fluentd_logtrap_running{jp1_pc_fluentd_hostname="hostname", jp1_pc_nodelabel="IM management node label name",jp1_pc_category="category ID", jp1_pc_logtrap_defname="log monitor name",jp1_pc_trendname="fluentd"} 1
The sample value is always 1. If a sample exists, it indicates that log monitoring for the log monitoring name set in the jp1_pc_logtrap_defname label is running at that time. If no sample exists, log monitoring is not running at that time.
(15) Multi-process start function
By default, Fluentd starts with one supervisor process and one worker. With the multi-process start function, Fluentd's supervisor process starts multiple workers and runs each worker in a separate process. The definition file settings required to use this function are as follows:
-
workers parameters
-
<worker> directive
-
<worker N-M> directive
These settings are made in the log monitoring common definition file (jpc_fluentd_common.conf), the text-formatted log file monitoring definition file, and the Windows event log monitoring definition file. For information about the settings for each definition file, see the corresponding file in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference manual (2. Definition File).
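As a sketch of these settings, the following starts two worker processes and pins one source to worker 0. The worker count and the source shown are illustrative assumptions, not values from this manual:

```
<system>
  # Start two worker processes under the supervisor process
  workers 2
</system>

# Settings inside this directive apply only to worker 0
<worker 0>
  <source>
    @type tail
    :
  </source>
</worker>
```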
(16) Notes
-
If you want to monitor the log of Fluentd itself, you must define the monitoring so that it does not loop.
Example: If there is an error in the definition file that monitors an application's log file, and the following error is output to the Fluentd log file, a JP1 event is issued to notify the user.
2022-11-08 20:50:05 +0900 [error]: #0 invalid line found...
Configure the monitor definition file to monitor the above "invalid line found" text in the Fluentd log file and to issue JP1 events. For example:
:
## [Input Settings]
<source>
  tag tail.fluentdlog
  path /opt/jp1ima/logs/fluentd/jpc_fluentd_service*
  :
  <parse>
    @type regexp
    expression ^((?<MESSAGE>.*))$
    :
  </parse>
</source>
:
## [Inclusion Settings]
<filter tail.fluentdlog>
  @type grep
  <regexp>
    key MESSAGE
    pattern /.*invalid line found.*/
  </regexp>
</filter>
...
When a JP1 event is issued by the above monitoring, the following log containing the "invalid line found" text is output.
2022-11-08 20:51:05 +0900 tail.fluentdlog.jp1event {...,"message":"2022-11-08 20:50:05 +0900 [error]: #0 invalid line found...",...}
With the above log output and configuration, JP1 event issuance repeats as follows:
1. An error occurs in Fluentd, and a log containing "invalid line found" is output.
2. Based on the monitor definition file settings, a JP1 event is issued for the log output in step 1, and a "tail.fluentdlog.jp1event" log containing the "invalid line found" text is output.
3. Based on the monitor definition file settings, a JP1 event is issued for the log output in step 2, and another "tail.fluentdlog.jp1event" log containing the "invalid line found" text is output.
4. Thereafter, JP1 events continue to be issued for the logs that are output each time a JP1 event is issued.
To avoid this, configure the definition file that monitors the Fluentd log file so that the logs output by JP1 event issuance are not monitored. In the above case, set [Exclusion Settings] so that "tail.fluentdlog.jp1event" logs are excluded from monitoring.
:
## [Inclusion Settings]
<filter tail.fluentdlog>
  @type grep
  <regexp>
    key MESSAGE
    pattern /.*invalid line found.*/
  </regexp>
</filter>
## [Exclusion Settings]
<filter tail.fluentdlog>
  @type grep
  <exclude>
    key MESSAGE
    pattern /^[^ ]*\s[^ ]*\s[^ ]*\s+tail\.fluentdlog\.jp1event.*/
  </exclude>
</filter>
:
-
User-installed gems are not supported. If you add your own gem to the Fluentd provided by JP1/IM - Agent, Fluentd might not work as expected, so do not install your own gems.
(17) Communication function
(a) Communication protocols and authentication methods
The following table shows the communication protocols and authentication methods used by the integrated agent.
Connection source | Connection target | Protocol | Authentication method
---|---|---|---
Fluentd | JP1/IM agent control base | HTTP | No authentication
#: ICMPv6 cannot be used.
(b) Network Configuration
The integrated agent can be used in a network configuration with an IPv4-only environment, or in a network configuration with a mix of IPv4 and IPv6 environments. In a mixed IPv4/IPv6 network configuration, only IPv4 communication is supported.
You can use the integrated agent in the following configuration without a proxy server:
Connection source | Connection target | Connection type
---|---|---
Fluentd | JP1/IM agent control base | No proxy server
The integrated agent transmits the following data:
Connection source | Connection target | Transmitted data
---|---|---
Fluentd | JP1/IM agent control base | Sends JSON-format data by using the HTTP POST method. For the JSON format, see the references below.
#1
For more information, see the description of the request message body in "5.6.5 JP1 Event-Translation" in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference manual.
#2
For more information, see the description of the request message body in "5.11.3 Write Trend Data" in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference manual.