7.2.3 Configuring an SSO cluster system environment (in Linux)
This subsection describes how to configure an SSO cluster system environment on the active and standby nodes in Linux.
The example in this subsection assumes the following settings:
- Shared disk: /dev/dsk/dg01/vol1
- Shared directory: /shared/sso
- Logical IP address: 133.108.120.4
- JP1 logical host name: rhost
- Storage location for the cluster control script: $SSO_TMP
(1) Configuring a cluster environment on the active node
The procedure for configuring a cluster environment on the active node consists of the following steps:
- Deactivate the resource group
- Mount the shared disk
- Set up an SSO cluster environment
- Set up the JP1-authentication logical host
- Set an IPv6 logical IP address (if an IPv6 network is to be monitored)
- Create a cluster control script
- Add the SSO resource to the resource group
- Check the cluster software setting file
- Add the NNMi connection settings
- Unmount the shared disk
Each step of the above procedure is described below.
(a) Deactivate the resource group
If the resource group is running, deactivate the resource group. (If you are using Veritas Cluster Server, deactivate the cluster software.)
The following shows an example of the command that deactivates the resource group for each type of cluster software:
In Veritas Cluster Server:
hastop -all
In HA Monitor:
<If no server groups are used>
monend server-alias-name
<If server groups are used>
If the server (resource group) for SSO and the prerequisite server are running, deactivate the servers.
monend alias-name-of-each-server
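For example, if the aliases follow the resource group names used later in this subsection, the servers could be stopped as shown below. This is a minimal sketch; jp1cm2sso and jp1cm2nnmi are assumed alias names, so replace them with the aliases defined in your servers file.
# Stop the SSO server first, then the prerequisite NNMi server (alias names are assumptions)
monend jp1cm2sso
monend jp1cm2nnmi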
(b) Mount the shared disk
Make sure that the shared disk is mounted on the active node.
The following shows an example of mounting the shared disk (/dev/dsk/dg01/vol1) for each type of cluster software:
- In Veritas Cluster Server:
# vxdg import dg01
# vxvol -g dg01 start vol1
# mount -t vxfs /dev/dsk/dg01/vol1 /shared
- In HA Monitor:
# vgchange -a y /dev/dsk/dg01
# mount /dev/dsk/dg01/vol1 /shared
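Whichever cluster software you use, you can verify that the shared disk is mounted before you continue. The following check is a general hint (not part of the original procedure) and assumes the example mount point /shared:
# Confirm that the shared disk is mounted at the expected mount point
mount | grep /shared
df -h /shared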
(c) Set up an SSO cluster environment
Execute the cluster environment setting command (ssoclustersetup) to set up an SSO cluster environment on the active node. For details about the cluster environment setting command, see ssoclustersetup (Linux only) in 5. Commands.
For the arguments of the cluster environment setting command, specify values as shown in the following table:
| Argument | Value to be specified |
|---|---|
| First argument | -construction (specifies that an environment is to be configured) |
| Second argument | -primary (specifies that the active node is the target) |
| Third argument | Shared folder name on the shared disk# |
| Fourth argument | Logical IP address (IPv4 address) |
The following shows an example of the command line. In this example, the command creates the SSO directories under a folder on the shared disk (/shared/sso) so that the folder can be used as the shared directory, and sets 133.108.120.4 as the logical IP address.
ssoclustersetup -construction -primary /shared/sso 133.108.120.4
If an error occurs, correct it with reference to the message that appears. After you have corrected the error, re-execute the command.
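If the command completes normally, the shared directory should now exist on the shared disk. As a quick check (a hint only, assuming the example path /shared/sso used above):
# List the contents created under the shared directory
ls -l /shared/sso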
(d) Set the JP1-authentication logical host
To adopt JP1 authentication as the authentication method so that JP1 authentication can be performed on the logical host, execute the cluster environment setting command (ssoclustersetup) to set the JP1-authentication logical host in SSO. For details about the cluster environment setting command, see ssoclustersetup (Linux only) in 5. Commands.
For the arguments of the cluster environment setting command, specify values as shown in the following table:
| Argument | Value to be specified |
|---|---|
| First argument | -logicalset (specifies the JP1 logical host) |
| Second argument | JP1 logical host name |
The following shows an example of the command that sets rhost as the JP1 logical host name:
ssoclustersetup -logicalset rhost
If an error occurs, correct it with reference to the message that appears. After you have corrected the error, re-execute the command.
(e) Set an IPv6 logical IP address (if an IPv6 network is to be monitored)
To monitor an IPv6 network, execute the cluster environment setting command (ssoclustersetup) to set an IPv6 logical IP address. For details about the cluster environment setting command, see ssoclustersetup (Linux only) in 5. Commands.
For the arguments of the cluster environment setting command, specify values as shown in the following table:
| Argument | Value to be specified |
|---|---|
| First argument | -defset (specifies that the action definition file is to be set) |
| Second argument | Logical IP address (IPv6 address) |
The following shows an example of the command that sets 2001:db8::7c as the IPv6 logical IP address:
ssoclustersetup -defset 2001:db8::7c
If an error occurs, correct it with reference to the message that appears. After you have corrected the error, re-execute the command.
(f) Create a cluster control script
Create a cluster control script from the cluster control script sample data provided by SSO.
To create a cluster control script:
1. Copy the cluster control script sample data.
   Copy the cluster control script sample data to any directory. The final location of the cluster control script file that you create is the $SSO_TMP directory on the local disk.
The following table lists the location of the cluster control script sample data.
Table 7‒8: Location of the cluster control script sample data

| No. | Cluster software | Sample file | Directory |
|---|---|---|---|
| 1 | Veritas Cluster Server | sso_vcs.sh | $SSO_SAMPLE/ha |
| 2 | HA Monitor | sso_hamon.sh | $SSO_SAMPLE/ha |
The following shows an example of saving the sample file for Veritas Cluster Server in the $SSO_TMP directory, which is the final location of the cluster control script that you create:
# cp -p /opt/CM2/SSO/sample/ha/sso_vcs.sh /var/opt/CM2/SSO/tmp/sso_vcs.sh
# chmod 744 /var/opt/CM2/SSO/tmp/sso_vcs.sh
2. Customize the cluster control script.
   Customize keys in the cluster control script. The following tables list and describe the keys to be customized for each type of cluster software.
- In Veritas Cluster Server:

Table 7‒9: Keys to be customized if the cluster software is Veritas Cluster Server

1. MUSTRUN (default: ssospmd)
   Sets the names of the SSO processes to be monitored.
   <Basic configuration>
   By default, the cluster software monitors only the ssospmd daemon process. To monitor other processes, specify a list of monitoring-target process names by using a single-byte space as a separator.
   Note that you cannot omit the ssospmd daemon process from this key. You can change the specification of this key only by adding values.
   Example:
   MUSTRUN="ssospmd ssoapmon ssocolmng ssorptd"
   If you do not want failover to occur when an SSO process stops, set 0 for the Critical key shown in Table 7-11.
   <Distributed configuration>
   Specify the SSO processes that can trigger a failover in the single-byte space-separated format as shown in the following example:
   Example:
   MUSTRUN="ssospmd ssoapmon ssocollectd ssocolmng ssorptd ssoconsoled"
   To make failover occur when an SSO process stops, set 1 for the Critical key shown in Table 7-11.
2. TERM_RETRY_COUNT (default: 5)
   Sets the maximum number of times stopping SSO (ssostop) is retried when the SSO resource becomes offline.
3. TERM_RETRY_INTERVAL (default: 30 seconds)
   Sets the interval at which stopping SSO (ssostop) is retried when the SSO resource becomes offline.
   The SSO stop processing (ssostop) might fail, depending on the status of SSO. Therefore, for SSO to stop normally, make sure that the stop processing is retried several times.
- In HA Monitor:

Table 7‒10: Keys to be customized if the cluster software is HA Monitor

1. MUSTRUN (default: none)
   Sets the names of the SSO processes to be monitored.
   <Basic configuration>
   By default, no values are set. That is, the cluster software switches the nodes when NNMi fails over, without monitoring SSO processes.
   If you want failover to occur when an SSO process stops, specify the names of the monitoring-target SSO processes by using a single-byte space as a separator.
   Example:
   MUSTRUN="ssospmd ssoapmon ssocolmng ssorptd"
   <Distributed configuration>
   Specify the SSO processes that can trigger a failover in the single-byte space-separated format as shown in the following example:
   Example:
   MUSTRUN="ssospmd ssoapmon ssocollectd ssocolmng ssorptd ssoconsoled"
2. TERM_RETRY_COUNT (default: 5)
   Sets the maximum number of times stopping SSO (ssostop) is retried when the SSO resource becomes offline.
3. TERM_RETRY_INTERVAL (default: 30 seconds)
   Sets the interval at which stopping SSO (ssostop) is retried when the SSO resource becomes offline.
   The SSO stop processing (ssostop) might fail, depending on the status of SSO. Therefore, for SSO to stop normally, make sure that the stop processing is retried several times.
The following shows an example of the settings that monitor all SSO processes (the parts in bold type). Note that if ssotrapd is to be started in a distributed configuration, ssotrapd must also be added to the list.
# **********************************************************************
# custom env
# **********************************************************************
MUSTRUN="ssospmd ssoapmon ssocollectd ssocolmng ssorptd ssoconsoled"
TERM_RETRY_COUNT=5
TERM_RETRY_INTERVAL=30
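As noted above, if ssotrapd is also started in a distributed configuration, it must be added to the list as well. The following line is only an illustration of such a setting, not a required value:
# Distributed configuration in which ssotrapd is also started and monitored
MUSTRUN="ssospmd ssoapmon ssocollectd ssocolmng ssorptd ssoconsoled ssotrapd"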
(g) Add the SSO resource to the resource group
The following describes how to add the SSO resource to the resource group for each type of cluster software:
In Veritas Cluster Server:
Edit the VCS setting file (/etc/VRTSvcs/conf/config/main.cf) to add the SSO resource settings to the cluster resource group.
1. Add an application definition.
   Add and set the cluster control script as the application that starts, stops, and monitors SSO.
The following table lists and describes the items to be set in the VCS setting file.
Table 7‒11: Items to be set in the VCS setting file

| No. | Section | Description | Example |
|---|---|---|---|
| 1 | Application | Specify the name of an application. | jp1cm2sso |
| 2 | StartProgram | Command line for starting the cluster control script created in (f) Create a cluster control script | -- |
| 3 | StopProgram | Command line for stopping the cluster control script created in (f) Create a cluster control script | -- |
| 4 | CleanProgram | Command line for forcibly stopping the cluster control script created in (f) Create a cluster control script | -- |
| 5 | MonitorProgram | Command line for monitoring the cluster control script created in (f) Create a cluster control script | -- |
| 6 | OfflineTimeout# | Timeout for the stop processing (in seconds) | 1500 |
| 7 | CleanTimeout# | Timeout used when an error occurs (in seconds) | 1500 |
| 8 | CloseTimeout# | Timeout used when failover or failback occurs (in seconds) | 1500 |
| 9 | Critical | Specify whether to use the MUSTRUN key in the cluster control script as a trigger to cause failover: 0 (does not use the key as a failover trigger) or 1 (uses the key as a failover trigger) | Basic configuration: 0; Distributed configuration: 1 |
The following shows an example of the application agent settings (in bold type):
<Basic configuration>
group jp1cm2nnmi (
    Application jp1cm2nnmi (
        ...
        )
    Application jp1cm2sso (
        StartProgram = "/var/opt/CM2/SSO/tmp/sso_vcs.sh -r"
        StopProgram = "/var/opt/CM2/SSO/tmp/sso_vcs.sh -h"
        CleanProgram = "/var/opt/CM2/SSO/tmp/sso_vcs.sh -h"
        MonitorProgram = "/var/opt/CM2/SSO/tmp/sso_vcs.sh -m"
        Critical = 0
        OfflineTimeout = 1500
        CleanTimeout = 1500
        CloseTimeout = 1500
        )
<Distributed configuration>
group jp1cm2sso (
    ...
    Application jp1cm2sso (
        StartProgram = "/var/opt/CM2/SSO/tmp/sso_vcs.sh -r"
        StopProgram = "/var/opt/CM2/SSO/tmp/sso_vcs.sh -h"
        CleanProgram = "/var/opt/CM2/SSO/tmp/sso_vcs.sh -h"
        MonitorProgram = "/var/opt/CM2/SSO/tmp/sso_vcs.sh -m"
        Critical = 1
        OfflineTimeout = 1500
        CleanTimeout = 1500
        CloseTimeout = 1500
        )
2. Add resource dependencies.
<Basic configuration>
For the SSO services that depend on NNMi services, set the dependencies between them in the following format:
SSO-application-name requires NNMi-application-name
The following shows an example of the resource dependency settings (in bold type):
group jp1cm2nnmi (
    Application jp1cm2nnmi (
        ...
        )
    Application jp1cm2sso (
        ...
        )
    jp1cm2sso requires jp1cm2nnmi
<Distributed configuration>
For the SSO services that depend on the logical IP address (IP resource) and shared disk (Mount resource), set the dependencies in the following format:
SSO-application-name requires logical-IP-address-(IP-resource)
SSO-application-name requires shared-disk-(Mount-resource)
The following shows an example of the resource dependency settings (in bold type):
group jp1cm2sso (
    ...
    Application jp1cm2sso (
        ...
        )
    ...
    jp1cm2sso requires IP_133_108_120_4
    jp1cm2sso requires shdsk1
In HA Monitor:
<In the basic configuration in which no server group is used>
Register the cluster control script in the scripts that were created when the NNMi cluster system was configured (/var/opt/OV/hacluster/group-name/cm2_*.sh).
To register the cluster control script in the start script or monitoring script, add the necessary entries after the NNMi commands are executed. To register the cluster control script in the stop script, add the necessary entries before the NNMi commands are executed.
The following shows an example of the settings in the start script (in bold type):
#!/bin/sh
RESOURCE_GROUP=jp1cm2nnmi

#start JP1/Cm2/NNMi
logger -i -t NNMi "NNMi start"
/opt/OV/misc/nnm/ha/nnmharg.ovpl NNM -start ${RESOURCE_GROUP}
RC=$?
logger -i -t NNMi "NNMi start rc=$RC ."
if [ $RC -ne 0 ];then
    exit $RC
fi

#start JP1/Cm2/SSO
logger -i -t SSO "SSO start"
/var/opt/CM2/SSO/tmp/sso_hamon.sh -r
RC=$?
logger -i -t SSO "SSO start rc=$RC ."
if [ $RC -ne 0 ];then
    exit $RC
fi
exit 0
The following shows an example of the settings in the stop script (in bold type):
#!/bin/sh
RESOURCE_GROUP=jp1cm2nnmi

#stop JP1/Cm2/SSO
logger -i -t SSO "SSO stop "
/var/opt/CM2/SSO/tmp/sso_hamon.sh -h
RC=$?
logger -i -t SSO "SSO stop rc=$RC"
if [ $RC -ne 0 ];then
    exit $RC
fi

#stop JP1/Cm2/NNMi
logger -i -t NNMi "NNMi stop "
/opt/OV/misc/nnm/ha/nnmharg.ovpl NNM -stop ${RESOURCE_GROUP}
RC=$?
logger -i -t NNMi "NNMi stop rc=$RC"
if [ $RC -ne 0 ];then
    exit $RC
fi
exit 0
The following shows an example of the settings in the monitoring script (in bold type):
#!/bin/sh
RESOURCE_GROUP=jp1cm2nnmi
MONITOR_INTERVAL=60

# main
while true
do
    #monitor JP1/Cm2/NNMi
    /opt/OV/misc/nnm/ha/nnmharg.ovpl NNM -monitor ${RESOURCE_GROUP}
    RC=$?
    if [ $RC -ne 0 ];then
        logger -i -t NNMi "NNMi monitor rc=$RC ."
        exit $RC
    fi

    #monitor JP1/Cm2/SSO
    /var/opt/CM2/SSO/tmp/sso_hamon.sh -m
    RC=$?
    if [ $RC -ne 0 ];then
        logger -i -t SSO "SSO monitor rc=$RC ."
        exit $RC
    fi

    sleep $MONITOR_INTERVAL
done
exit 0
<In the basic configuration in which server groups are used or in a distributed configuration>
On the SSO server (resource group) set in HA Monitor, specify the following settings:
1. Preparing scripts for HA Monitor
   Create scripts that start, stop, and monitor SSO from HA Monitor.
   Directory to store the scripts: $SSO_TMP
The following shows an example of the SSO start script (sso_start.sh):
#!/bin/sh

#start JP1/Cm2/SSO
logger -i -t SSO "SSO start"
/var/opt/CM2/SSO/tmp/sso_hamon.sh -r
RC=$?
logger -i -t SSO "SSO start rc=$RC ."
if [ $RC -ne 0 ];then
    exit $RC
fi
exit 0
The following shows an example of the SSO stop script (sso_stop.sh):
#!/bin/sh

#stop JP1/Cm2/SSO
logger -i -t SSO "SSO stop "
/var/opt/CM2/SSO/tmp/sso_hamon.sh -h
RC=$?
logger -i -t SSO "SSO stop rc=$RC"
if [ $RC -ne 0 ];then
    exit $RC
fi
exit 0
The following shows an example of the SSO monitoring script (sso_monitor.sh):
#!/bin/sh
MONITOR_INTERVAL=60

# main
while true
do
    #monitor JP1/Cm2/SSO
    /var/opt/CM2/SSO/tmp/sso_hamon.sh -m
    RC=$?
    if [ $RC -ne 0 ];then
        logger -i -t SSO "SSO monitor rc=$RC ."
        exit $RC
    fi
    sleep $MONITOR_INTERVAL
done
exit 0
When you have created the scripts, set the owner, group, and permission for the scripts. The following shows an example of specifying the settings:
cd /var/opt/CM2/SSO/tmp
chown root:sys sso_start.sh sso_stop.sh sso_monitor.sh
chmod 755 sso_start.sh sso_stop.sh sso_monitor.sh
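To confirm that the owner, group, and permissions were applied as intended, you can list the scripts (a general hint, not part of the original procedure):
# Check the owner, group, and permissions of the HA Monitor scripts
ls -l /var/opt/CM2/SSO/tmp/sso_start.sh /var/opt/CM2/SSO/tmp/sso_stop.sh /var/opt/CM2/SSO/tmp/sso_monitor.sh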
2. Edit the server-based environment setting definition file (servers)
   Specify the script files that you created in Preparing scripts for HA Monitor for the appropriate operands in the servers file. That is, specify the start script for the name or actcommand operand, the stop script for the termcommand operand, and the monitoring script for the patrolcommand operand.
Path: /opt/hitachi/HAmon/etc/servers
<In a configuration in which no server group is used>
server  name            /var/opt/CM2/SSO/tmp/sso_start.sh,
        alias           <resource_group>,
        acttype         monitor,
        initial         online,
        disk            /dev/vg06,
        termcommand     /var/opt/CM2/SSO/tmp/sso_stop.sh,
        fs_name         /dev/mapper/vg06-lvol1,
        fs_mount_dir    /shared,
        fs_umount_retry 10,
        patrolcommand   /var/opt/CM2/SSO/tmp/sso_monitor.sh,
        servexec_retry  0,
        waitserv_exec   yes,
        retry_stable    60,
        lan_updown      use;
Note:
- The initial operand specifies the type of the server to be started. To start the server as the active node, specify online. To start the server as the standby node, specify standby.
- For the disk, fs_name, and fs_mount_dir operands, specify values that are appropriate for the target system (see the hint after this note).
- For <resource_group>, specify the resource group name.
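To find values appropriate for the disk, fs_name, and fs_mount_dir operands on your system, standard LVM and mount commands can help. This is an optional hint, not part of the original procedure; /shared is the example mount point used in this subsection:
# Check the volume group and logical volume names on this node
vgs
lvs
# Check which device is mounted at the shared mount point
df -h /shared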
<In a configuration in which server groups are used>
server  name            <any-name-for-the-SSO-resource>,
        actcommand      /var/opt/CM2/SSO/tmp/sso_start.sh,
        alias           <resource_group>,
        acttype         monitor,
        initial         online,
        termcommand     /var/opt/CM2/SSO/tmp/sso_stop.sh,
        patrolcommand   /var/opt/CM2/SSO/tmp/sso_monitor.sh,
        servexec_retry  0,
        waitserv_exec   yes,
        retry_stable    60,
        group           <group_name>,
        lan_updown      nouse,
        parent          <alias-of-the-underlying-server>;
Note:
- The initial operand specifies the type of the server to be started. To start the server as the active node, specify online. To start the server as the standby node, specify standby.
- For <resource_group>, specify the resource group name.
- For <group_name>, specify the same name for the servers that fail over together.
- The parent operand specifies the underlying dependency. Specify the alias of the underlying server.
  In the basic configuration, specify a dependency indicating that SSO depends on NNMi.
  In a distributed configuration, specify a dependency indicating that SSO depends on a shared disk and on either a server with a logical IP address or a resource server. The underlying shared disk and the server with a logical IP address or resource server must already have been set up.
(h) Check the cluster software setting file
The following describes how to check the cluster software setting file for each type of cluster software:
- In Veritas Cluster Server:
  Execute the following command to verify the syntax of the VCS setting file (see the example after this list):
  hacf -verify path-of-the-directory-containing-the-VCS-setting-file
- In HA Monitor:
  If the cluster software is HA Monitor, you do not need to perform any tasks.
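In Veritas Cluster Server, for example, because the VCS setting file edited in (g) is /etc/VRTSvcs/conf/config/main.cf, the directory to verify is typically /etc/VRTSvcs/conf/config:
# Verify the syntax of the VCS configuration files in the standard configuration directory
hacf -verify /etc/VRTSvcs/conf/config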
(i) Add the NNMi connection settings
Execute the ssonnmsetup command to add the NNMi connection settings. For details on the ssonnmsetup command, see ssonnmsetup in 5. Commands.
(j) Unmount the shared disk
Unmount the shared disk that was mounted in (b) Mount the shared disk.
The following describes how to unmount the shared disk for each type of cluster software:
- In Veritas Cluster Server:
# umount /shared
# vxdg deport dg01
- In HA Monitor:
# umount /shared
# vgchange -a n /dev/dsk/dg01
(2) Configuring a cluster environment on the standby node
The procedure for configuring a cluster environment on the standby node consists of the following steps:
- Set up an SSO cluster environment
- Create a cluster control script
- Add the SSO resource to the resource group
Each step of the above procedure is described below.
(a) Set up an SSO cluster environment
Execute the cluster environment setting command (ssoclustersetup) to set up an SSO cluster environment on the standby node. For details about the cluster environment setting command, see ssoclustersetup (Linux only) in 5. Commands.
For the arguments of the cluster environment setting command, specify values as shown in the following table:
| Argument | Value to be specified |
|---|---|
| First argument | -construction (specifies that an environment is to be configured) |
| Second argument | -secondary (specifies that the standby node is the target) |
| Third argument | Shared folder name on the shared disk |
The following shows an example of the command line:
ssoclustersetup -construction -secondary /shared/sso
If an error occurs, correct it with reference to the message that appears. After you have corrected the error, re-execute the command.
(b) Create a cluster control script
Copy a cluster control script and customize it by using the procedure in (1)(f) Create a cluster control script. Make sure that the contents and path of the script file are the same as those on the active node.
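One way to keep the script identical on both nodes is to copy it from the active node and then compare the copies. The following is a minimal sketch for Veritas Cluster Server that assumes the example path used in (1)(f) and an active node host name of active-host (an illustrative name):
# Copy the customized cluster control script from the active node (host name is an assumption)
scp -p active-host:/var/opt/CM2/SSO/tmp/sso_vcs.sh /var/opt/CM2/SSO/tmp/sso_vcs.sh
chmod 744 /var/opt/CM2/SSO/tmp/sso_vcs.sh
# Confirm that the two copies match
ssh active-host md5sum /var/opt/CM2/SSO/tmp/sso_vcs.sh
md5sum /var/opt/CM2/SSO/tmp/sso_vcs.sh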
(c) Add the SSO resource to the resource group
Add the SSO resource to the resource group with reference to (1)(g) Add the SSO resource to the resource group.
Make sure that the cluster software settings specified on the standby node are the same as those specified on the active node.
For HA Monitor, however, on the standby node, standby must be specified for the initial operand in the server-based environment setting definition file (servers).
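For example, in the servers file on the standby node, only the initial operand differs from the entry on the active node; the rest of the server definition must be identical. A sketch of the relevant fragment (the surrounding operands are the same as in (1)(g)):
server  name            /var/opt/CM2/SSO/tmp/sso_start.sh,
        ...
        initial         standby,
        ...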
When the above settings have been specified, the resource group can be activated.
Use the procedure below to activate the resource group on the active node (or, if the cluster software is Veritas Cluster Server, start the cluster software on the active and standby nodes). For details about the commands used in the procedure, see the documentation for the cluster software.
In Veritas Cluster Server:
1. Start the cluster software on the active and standby nodes.
   # hastart
2. If the resource group is not activated, execute the following command to activate the resource group on the active node:
   # hagrp -online resource-group-name -sys active-host-name
In HA Monitor:
If there is an underlying server required to start the SSO server (resource group), start the underlying server, and then execute the following command to activate the resource group:
# monbegin resource-group-name
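To confirm that the resource group has come online, you can check its status with the cluster software's own status command. These are general commands of the respective products, shown here only as a hint:
# Veritas Cluster Server: display a summary of group and resource states
hastatus -sum
# HA Monitor: display the status of the servers (resource groups)
monshow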