1.22.1 Creating container images
(1) Conditions for the base container image
For details about the conditions that the base container image must satisfy for a Docker image or a Podman image, see 14.3.12(4)(b) Container images assumed to be and 14.3.12(5)(b) Container images assumed to be, respectively, in the JP1/Integrated Management 3 - Manager Overview and System Design Guide.
(2) How to create Docker and Podman images
-
Create a working directory on the Docker host or Podman host on which you want to create the image.
-
Prepare the following items under the working directory:
-
Dockerfile
-
JP1/IM - Agent installation package
-
Supervisord definition file (this example assumes that Supervisor is used)
-
Startup script
-
Credentials file (when Yet another cloudwatch exporter is used)
-
Files to be placed in the conf directory
Among the JP1/IM - Agent definition files, place the files to be updated in the conf directory. The directory configuration is as follows.
Working-directory/#1
+ Dockerfile#2
+ JP1IMA/#3
|  + X64LIN/
|     + h_inst
|     + ppmanage
|     + setup
|     + PCC842C/
|        + 9GDL/
|           + ...
+ supervisord.conf#4
+ start.sh#5
+ credentials#6
+ conf/
   + jpc_imagentcommon.json.model#7
   + jpc_alerting_rules.yml.model#8
   + jpc_ya_cloudwatch_exporter.yml.model#9
   + jpc_blackbox_exporter.yml.model#10
   + jpc_file_sd_config_blackbox_http.yml.model#11
   + jpc_file_sd_config_blackbox_icmp.yml.model#12
   + jpc_process_exporter.yml#13
   + jpc_script_exporter.yml#14
   + promitor/#15
   |  + scraper/#16
   |  |  + runtime.yaml#17
   |  |  + metrics-declaration.yaml#18
   |  + resource-discovery/#19
   |     + runtime.yaml#20
   |     + resource-discovery-declaration.yaml#21
   + user/
      + cert/
         + XXXX.crt#22
#1: The working directory created in step 1
#2: Dockerfile
#3: The directories and files extracted from the JP1/IM - Agent installation package
#4: Supervisord definition file
#5: Start script
#6: AWS credential file (when Yet another cloudwatch exporter is used)
#7: imagent common configuration file
#8: alert configuration file (when defining alerts)
#9: Yet another cloudwatch exporter configuration file
#10: Blackbox exporter configuration file
#11: Blackbox (HTTP/HTTPS monitoring) discovery configuration file
#12: Blackbox (ICMP monitoring) discovery configuration file
#13: process exporter configuration file (If changing the settings)
#14: script exporter configuration file (If changing the settings)
#15: promitor configuration file directory (If changing the settings)
#16: promitor scraper configuration file directory (If changing the settings)
#17: promitor scraper configuration file (If changing the settings)
#18: promitor scraper configuration file (If changing the settings)
#19: promitor resource discovery configuration file directory (If changing the settings)
#20: promitor resource discovery configuration file (If changing the settings)
#21: promitor resource discovery configuration file (If changing the settings)
#22: CA certificate for connecting to the integrated manager host
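The working-directory layout above can be prepared in advance with ordinary shell commands. The following is a hypothetical sketch only: the /tmp/jp1ima-work path and the empty placeholder files are illustrative, and the real contents (the extracted installation package, the definition files, and so on) must come from your environment.

```shell
# Sketch: create the working-directory skeleton shown above.
# /tmp/jp1ima-work is an example path; replace it with your working directory.
WORKDIR=/tmp/jp1ima-work
mkdir -p "$WORKDIR/JP1IMA/X64LIN" \
         "$WORKDIR/conf/promitor/scraper" \
         "$WORKDIR/conf/promitor/resource-discovery" \
         "$WORKDIR/conf/user/cert"
# Placeholder files; in practice these are the real Dockerfile, scripts,
# credentials, and .model definition files.
touch "$WORKDIR/Dockerfile" "$WORKDIR/supervisord.conf" \
      "$WORKDIR/start.sh" "$WORKDIR/credentials" \
      "$WORKDIR/conf/jpc_imagentcommon.json.model"
chmod a+x "$WORKDIR/start.sh"
ls -R "$WORKDIR"
```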
The build procedure for connecting to Azure with a service principal is as follows.
1. In the build procedure described in Building a container environment for integrated agents for JP1/IM - Agent, perform the steps other than step 4 in 1.21.2(8) Set up of Promitor > (b) Configuring authentication information for connecting to Azure and then create the files under the working directory.
2. After starting the container, perform step 4 in 1.21.2(8) Set up of Promitor > (b) Configuring authentication information for connecting to Azure.
3. Start the Promitor service. For details, see Building a container environment for integrated agents for service control.
- Notes on definition files
Do not specify IP addresses in definition files. Also, do not include IP addresses in certificate files. If IP address data is included, containers whose IP addresses change dynamically cannot be handled.
- Sample Dockerfile
Set the installer environment variables as needed, as shown in the ENV lines in the example below.
FROM oraclelinux:8
RUN mkdir /root/.aws
COPY credentials /root/.aws/credentials
COPY JP1IMA/X64LIN /tmp/X64LIN
COPY start.sh /opt/start.sh
COPY supervisord.conf /opt/supervisord.conf
RUN dnf install -y python3
RUN pip3 install supervisor
RUN dnf install -y cpio
RUN chmod a+x /opt/start.sh
ENV JP1IMAGENT_INSTALL_MODE "image"
ENV JP1IMAGENT_IMMGR_HOST "Set the host name of the destination manager"
ENV JP1IMAGENT_IMMGR_INITIAL_SECRET "Set the initial secret"
ENV JP1IMAGENT_ADDON_BLACKBOX_EXPORTER_ACTIVE "yes"
ENV JP1IMAGENT_ADDON_YA_CLOUDWATCH_EXPORTER_ACTIVE "yes"
RUN /tmp/X64LIN/setup -f -k P-CC842C-9GDL /tmp/
COPY conf /opt/jp1ima/conf
CMD [ "/bin/bash", "-c", "/opt/start.sh" ]
- Sample service definition file for supervisord
[unix_http_server]
file=/tmp/supervisor.sock   ; the path to the socket file

[supervisord]
logfile=/tmp/supervisord.log
logfile_maxbytes=5MB
logfile_backups=10
loglevel=info
pidfile=/tmp/supervisord.pid
nodaemon=true

[program:imagent]
command=/opt/jp1ima/bin/imagent
directory=/opt/jp1ima/bin/
stopasgroup=true
autostart=true
stopwaitsecs=180

[program:imagentaction]
command=/opt/jp1ima/bin/imagentaction
directory=/opt/jp1ima/bin/
stopasgroup=true
autostart=true
stopwaitsecs=180

[program:imagentproxy]
command=/opt/jp1ima/bin/imagentproxy
directory=/opt/jp1ima/bin/
stopasgroup=true
autostart=true
stopwaitsecs=180

[program:prometheus]
command=command-line#1
directory=/opt/jp1ima/bin#2
autostart=true#3
stopasgroup=true
stopwaitsecs=180

[program:alertmanager]
command=command-line#1
directory=/opt/jp1ima/bin#2
autostart=true#3
stopasgroup=true
stopwaitsecs=180

[program:node_exporter]
command=command-line#1
directory=/opt/jp1ima/bin#2
autostart=true#3
stopasgroup=true
stopwaitsecs=180

[program:program-name]
command=command-line#1
directory=/opt/jp1ima/bin#2
autostart=true#3
stopasgroup=true#4
stopwaitsecs=180
NOTE: No entries are required for services that are not used.
#1: For command-line, use the value of ExecStart in the jpc_xxxxxxx.service file stored in the /usr/lib/systemd/system directory of the physical host.
#2: When a JP1/IM - Agent program is run from the service management program, specify /opt/jp1ima/bin as the current directory.
#3: Enable automatic startup.
#4: Set stopasgroup to true.
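As footnote #1 says, each command line comes from the ExecStart value of the corresponding unit file on the physical host. The sketch below is hypothetical: it recreates a sample unit file under /tmp so the commands are self-contained, and the Prometheus command line in it is a made-up example, not the actual JP1/IM - Agent command.

```shell
# Sketch: extract the ExecStart value from a systemd unit file.
# On the physical host the real file lives in /usr/lib/systemd/system;
# a sample file is created under /tmp here for illustration only.
cat > /tmp/jpc_prometheus_server.service <<'EOF'
[Service]
ExecStart=/opt/jp1ima/bin/prometheus --config.file=/opt/jp1ima/conf/jpc_prometheus_server.yml
EOF
# Everything after the first "=" is the command line to paste into
# the command= entry of the supervisord definition file.
grep '^ExecStart=' /tmp/jpc_prometheus_server.service | cut -d= -f2-
# prints: /opt/jp1ima/bin/prometheus --config.file=/opt/jp1ima/conf/jpc_prometheus_server.yml
```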
- Startup Script Example
#!/bin/bash
/opt/jp1ima/tools/jimasetup container#1
mv -f /opt/jp1ima/conf/jpc_file_sd_config_blackbox_http.yml \
      /opt/jp1ima/conf/jpc_file_sd_config_off#2
mv -f /opt/jp1ima/conf/jpc_file_sd_config_blackbox_icmp.yml \
      /opt/jp1ima/conf/jpc_file_sd_config_off#2
mv -f /opt/jp1ima/conf/jpc_file_sd_config_cloudwatch.yml \
      /opt/jp1ima/conf/jpc_file_sd_config_off#2
mv -f /opt/jp1ima/conf/jpc_file_sd_config_process.yml \
      /opt/jp1ima/conf/jpc_file_sd_config_off#2
mv -f /opt/jp1ima/conf/jpc_file_sd_config_promitor.yml \
      /opt/jp1ima/conf/jpc_file_sd_config_off#2
exec /usr/local/bin/supervisord -c /opt/supervisord.conf#3
#1: Executes the initial setup command of JP1/IM - Agent with the container option specified.
#2: For the following services, move the discovery configuration files that correspond to the unused services from the /opt/jp1ima/conf directory to the /opt/jp1ima/conf/jpc_file_sd_config_off directory:

Service                  Discovery configuration file
prometheus_server        None
alertmanager             None
windows_exporter         jpc_file_sd_config_windows.yml
blackbox_exporter        jpc_file_sd_config_blackbox_http.yml
                         jpc_file_sd_config_blackbox_icmp.yml
ya_cloudwatch_exporter   jpc_file_sd_config_cloudwatch.yml
fluentd                  None
promitor                 jpc_file_sd_config_promitor.yml
script_exporter          None
#3: Executes the service management tool.
- Example of imagent common configuration file
An example of the model file (jpc_imagentcommon.json.model) of the imagent common configuration file is as follows:
If you use TLS, set the path of the CA certificate in ca_file. If you do not use TLS, delete the tls_config item.
{
  "JP1_BIND_ADDR": "ANY",
  "COM_LISTEN_ALL_ADDR": 0,
  "COM_MAX_LISTEN_NUM": 4,
  "JP1_CLIENT_BIND_ADDR": "ANY",
  "http": {
    "max_content_length": 10,
    "client_timeout": 30
  },
  "immgr": {
    "host": "@@immgr.host@@",
    "proxy_url": "@@immgr.proxy_url@@",
    "proxy_user": "@@immgr.proxy_user@@",
    "tls_config": {
      "ca_file": "/opt/jp1ima/conf/user/cert/XXXX.crt",
      "insecure_skip_verify": false,
      "min_version": "TLSv1_2"
    },
    "imbase": {
      "port": @@immgr.imbase_port@@
    },
    "imbaseproxy": {
      "port": @@immgr.imbaseproxy_port@@
    }
  }
}
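The @@...@@ placeholders in the .model file are filled in when the actual definition file is generated. The following sketch is hypothetical: it substitutes made-up example values (the host name and port number are not real JP1 values) with sed on a trimmed-down copy of the model, then checks that the result parses as JSON.

```shell
# Sketch: substitute example values for the @@...@@ placeholders and
# verify that the generated file is valid JSON. Values are examples only.
cat > /tmp/jpc_imagentcommon.json.model <<'EOF'
{
  "immgr": {
    "host": "@@immgr.host@@",
    "imbase": { "port": @@immgr.imbase_port@@ }
  }
}
EOF
sed -e 's/@@immgr.host@@/manager.example.com/' \
    -e 's/@@immgr.imbase_port@@/20724/' \
    /tmp/jpc_imagentcommon.json.model > /tmp/jpc_imagentcommon.json
# json.tool fails with a non-zero exit status if the file is not valid JSON.
python3 -m json.tool /tmp/jpc_imagentcommon.json
```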
-
Navigate to the directory where the Dockerfile resides and run a Docker or Podman build.
-
For Docker
# docker build -t Docker-image-name:tag .
-
For Podman
# podman build -t Podman-image-name:tag .
(3) How to create Docker and Podman containers
-
Launch Docker or Podman containers.
-
For Docker
# docker container run --ulimit nofile=65536:65536 --add-host=Manager-Host-name:IP-address -d -h Host-name-for-the-container Docker-Image-Name:Tag
-
For Podman
# podman container run --ulimit nofile=65536:65536 --add-host=Manager-Host-name:IP-address -d -h Host-name-for-the-container Podman-Image-Name:Tag
- Important
-
The host name of the container is displayed in the integrated operation viewer tree. If you omit the -h option, Docker or Podman sets the host name automatically, and that host name appears in the integrated operation viewer tree. We therefore recommend that you specify the host name explicitly.
-
Check the log of the service management tool and the log of JP1/IM - Agent to make sure that the services are running normally.
If an error is output, or if a process that should be running is not running, identify the cause and re-create the Docker or Podman image.
-
Refresh IM management node tree.
For details, see 1.21.2(16) Creation and import of IM management node tree data (for Windows) (mandatory).
(4) How to delete Docker and Podman containers
-
Stop Docker or Podman containers.
-
For Docker
# docker container stop container-name
-
For Podman
# podman container stop container-name
-
Delete a Docker or Podman container.
-
For Docker
# docker container rm container-name
-
For Podman
# podman container rm container-name
-
Refresh IM management node tree.
For details, see 1.21.2(16) Creation and import of IM management node tree data (for Windows) (mandatory).
(5) About the impact on kernel parameters
For the upper limit on the number of file descriptors, specify 65536 in the --ulimit option when starting containers.
For details, see 1.22.1(3) How to create Docker and Podman containers.
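You can confirm the limit that --ulimit nofile=65536:65536 establishes by running ulimit -n inside the running container, for example with docker container exec container-name sh -c 'ulimit -n' (the container name here is a placeholder). The sketch below runs the same check locally, so it simply reports the current shell's limit rather than the container's:

```shell
# Sketch: "ulimit -n" reports the soft limit on open file descriptors.
# Inside a container started with --ulimit nofile=65536:65536 it should
# print 65536; run locally it prints the current shell's limit instead.
sh -c 'ulimit -n'
```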