1.3.2 Managing the log file size
One of the factors that can cause insufficient disk capacity is an increase in the size of log files.
In the case of JP1/IM and JP1/Base, if you estimate the log file size in advance, you do not need to allow for further growth, because JP1/IM and JP1/Base output their logs by rotating among a fixed set of log files.
For the OS and other products on the same host, check their specifications and confirm that their log files will not grow without limit.
(1) Checking the log output by the Intelligent Integrated Management Database
Depending on the PostgreSQL settings, the Intelligent Integrated Management Database outputs the following logs. For details about the PostgreSQL settings, see "PostgreSQL configuration file (postgresql.conf)" (2. Definition File) in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference manual.
- Storage destination
  In Windows:
  Storage destination of Intelligent Integrated Management Database data file#\log
  In UNIX:
  Storage destination of Intelligent Integrated Management Database data file#/log
  #
  See "2.7.1(1)(d) Storage destination of related files" in the JP1/Integrated Management 3 - Manager Overview and System Design Guide manual.
- File name
  postgresql-%a#.log
  #
  "%a" is a three-letter abbreviation of the day of the week in the GMT time zone (Mon, Tue, Wed, Thu, Fri, Sat, or Sun).
- Contents of the output
  Each line of the log is output in the following format:
  YYYY-MM-DD HH:MM:SS.FFF GMT [process-ID] SQL-statement-logging-message
  The "YYYY-MM-DD HH:MM:SS.FFF GMT [process-ID]" part is a prefix consisting of the timestamp in the GMT time zone and the process ID; the SQL statement logging message follows it.
- Estimating the size of the log files
  There is no upper limit on the size of the log files.
  The log output size in normal operation can be estimated as follows (a worked example follows the footnotes below):
  Log output size in normal operation (one week)
  = log output size for one day x 7 (days)
  Log output size for one day
  = log output size per sample#1 x number of scrapes per day#2 x number of samples collected from all Exporters per scrape#3
  #1
  Indicates the size of the log entry that is output when trend data is written.
  For JP1/IM - Agent, assume 50 bytes.
  #2
  For JP1/IM - Agent, calculate this value from the scrape interval specified by scrape_interval in the Prometheus configuration file (jpc_prometheus_server.yml). For details, see the description of scrape_interval in "Prometheus configuration file (jpc_prometheus_server.yml)" (2. Definition File) in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference manual.
  If the specified scrape interval is 1m (1 minute), the number of scrapes per day is 1,440 (60 times per hour x 24 hours).
  #3
  Indicates the sum of the numbers of metrics specified in the metric definition files of the Exporters targeted by the monitoring agent.
  For details about the metric definition file of each Exporter supported by JP1/IM - Agent, see the descriptions of the metric definition files in "JP1/Integrated Management 3 - Manager Command, Definition File and API Reference" (2. Definition File).
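  As a worked example of the formula above (the number of samples per scrape below is an assumption chosen for illustration, not a measured value), the following shell snippet computes the daily and weekly estimates:

  # Worked example of the estimate formula (illustrative values only)
  BYTES_PER_SAMPLE=50        # log output size per sample (JP1/IM - Agent)
  SCRAPES_PER_DAY=1440       # scrape_interval of 1m: 60 times per hour x 24 hours
  SAMPLES_PER_SCRAPE=1000    # assumed total number of metrics across all Exporters
  DAY_BYTES=`expr $BYTES_PER_SAMPLE \* $SCRAPES_PER_DAY \* $SAMPLES_PER_SCRAPE`
  WEEK_BYTES=`expr $DAY_BYTES \* 7`
  echo "per day:  $DAY_BYTES bytes"    # 72,000,000 bytes (about 69 MB)
  echo "per week: $WEEK_BYTES bytes"   # 504,000,000 bytes (about 481 MB)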
On Linux, you can reduce the disk space used by the log files by creating the following script (which compresses or deletes logs older than a certain period)# and running it regularly (once per day).
#
Place the script in the /etc/cron.daily directory.
The following is an example of a shell script that compresses logs daily and deletes them after 30 days:
#!/usr/bin/bash

LOGDIR=/var/opt/jp1imm/database/imgndb/log
LOGSAVE=/var/opt/jp1imm/log/imgndb
COMPRESS_DAY=1
REMOVE_DAY=30
COMPRESS_CMD=gzip

COMPRESS_MTIME=`expr $COMPRESS_DAY - 1`

# Search for ".log" files last modified more than $COMPRESS_MTIME days ago
COMPRESS_FILE=`find $LOGDIR -name '*.log' -daystart -type f -mtime +$COMPRESS_MTIME`

# Append the date to the name of each file to be compressed, then compress it
if [ "$COMPRESS_FILE" != "" ]
then
  for i in $COMPRESS_FILE
  do
    if [ -f ${i} ]
    then
      mv ${i} ${i}.`date '+%Y%m%d'`
      $COMPRESS_CMD ${i}.`date '+%Y%m%d'`
    fi
  done
fi

# Search for ".gz" files
MV_FILE=`find $LOGDIR -name '*.gz'`

# Move the compressed log files to $LOGSAVE
if [ "$MV_FILE" != "" ]
then
  for i in $MV_FILE
  do
    if [ -f ${i} ]
    then
      mv ${i} $LOGSAVE
    fi
  done
fi

REMOVE_MTIME=`expr $REMOVE_DAY - 1`

# Search for ".gz" files last modified more than $REMOVE_MTIME days ago
REMOVE_FILE=`find $LOGSAVE -name 'postgresql-*.gz' -daystart -type f -mtime +$REMOVE_MTIME`

# Delete the expired compressed log files
if [ "$REMOVE_FILE" != "" ]
then
  for i in $REMOVE_FILE
  do
    if [ -f ${i} ]
    then
      rm -f ${i}
    fi
  done
fi

When the above script is run regularly, the estimated disk space used by the log files is approximately 20 GB# when the number of monitored items is 500.
  #
  The sum of the sizes of the following log files:
  - Size of the current day's log file: about 4 GB
  - Size of the previous day's log file (before compression): about 4 GB
  - Size of the compressed log files (about 400 MB each) x 30 days: about 12 GB
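To have cron run the script once per day, place it in /etc/cron.daily and make it executable. A minimal sketch (the file names compress_imgndb_log.sh and jp1imm-pglog are examples, not fixed names):

  # Install the rotation script into the daily cron directory
  # (the script and destination file names here are examples)
  cp compress_imgndb_log.sh /etc/cron.daily/jp1imm-pglog
  chmod 755 /etc/cron.daily/jp1imm-pglog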
Because the Intelligent Integrated Management Database logs are extremely large, the data collection tool# does not collect them. If your situation corresponds to the cases described in "12.3.1(1)(b) JP1 information" or "12.3.1(2)(b) JP1 information", you must collect the logs manually.
  #
  For details about the data collection tools, see "jim_log.bat (Windows Only)" (1. Command) and "jim_log.sh (UNIX Only)" (1. Command) in the JP1/Integrated Management 3 - Manager Command, Definition File and API Reference manual.
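  When collecting the logs manually on Linux, one straightforward approach is to archive the two log directories used in this subsection (the output path under /tmp below is an example):

  # Archive the Intelligent Integrated Management Database logs for manual collection
  # (the paths are the Linux defaults used by the sample script above; adjust as needed)
  tar czf /tmp/imgndb-log-`date '+%Y%m%d'`.tar.gz \
      /var/opt/jp1imm/database/imgndb/log \
      /var/opt/jp1imm/log/imgndb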