Hitachi

JP1 Version 12 JP1/SNMP System Observer Description, Operator's Guide and Reference


7.4.3 Releasing the SSO cluster system environment (in Linux)

This subsection describes how to release the SSO cluster system environment on the active and standby nodes in Linux.

Organization of this subsection

(1) Releasing the cluster system environment on the active node
(2) Releasing the cluster system environment on the standby node
(3) Delete the SSO shared data
(4) Activate the NNMi resource group (in the basic configuration only)
(5) Delete the resource group (in the distributed configuration only)

(1) Releasing the cluster system environment on the active node

The procedure for releasing the cluster system environment on the active node consists of the following steps:

  1. Delete the database

  2. Deactivate the resource group

  3. Delete SSO from the resource group

  4. Check the cluster software setting file

  5. Delete the cluster control script

  6. Undo SSO cluster setup
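The sequence above can be sketched as a single shell script. This is an illustrative dry run only: the command names (ssodbdel, monend, ssoclustersetup) are the ones this manual documents, but the server alias sso_sv is a hypothetical example value, and steps 3 to 5 are manual edits that cannot be scripted generically. With DRY_RUN=1 (the default here), the script only prints the commands it would execute.

```shell
#!/bin/sh
# Dry-run sketch of the active-node release sequence (HA Monitor, basic
# configuration without server groups). "sso_sv" and "/shared/sso" are
# hypothetical example values, not product defaults.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "WOULD RUN: $*"
  else
    "$@"
  fi
}

run ssodbdel -all                                  # 1. Delete the database
run monend sso_sv                                  # 2. Deactivate the resource group
# Steps 3-5 are manual edits: remove the SSO definitions from the cluster
# software setting files and delete the cluster control script.
run ssoclustersetup -release -primary /shared/sso  # 6. Undo SSO cluster setup
```

Setting DRY_RUN=0 would execute the commands for real; keeping the dry run as the default makes it safe to review the sequence first.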

Each step of the above procedure is described below.

(a) Delete the database

Execute the following command to delete the database:

ssodbdel -all

(b) Deactivate the resource group

Deactivate the resource group (if the cluster software is Veritas Cluster Server, stop it).

The following shows a command execution example for each type of cluster software:

In Veritas Cluster Server:

# hastop -all

In HA Monitor:

<If no server groups are used>

# monend server-alias-name

<If server groups are used>

Deactivate the server (resource group) for SSO, but do not deactivate the prerequisite server.

# monend SSO-server-alias-name

(c) Delete SSO from the resource group

Delete SSO from the resource group.

The following shows the procedure for deleting SSO for each type of cluster software:

In Veritas Cluster Server:

Delete the SSO settings related to the application agent and resource dependencies from the VCS setting file (/etc/VRTSvcs/conf/config/main.cf).

In HA Monitor:

<In the basic configuration in which no server group is used>

Delete all of the SSO definitions that were added to the script that was created when the NNMi cluster was configured (/var/opt/OV/hacluster/group-name/cm2_*.sh).

<In the basic configuration in which server groups are used or in a distributed configuration>

Perform the following procedure:

  1. Delete the scripts created by a user when the SSO cluster was set up (sso_start.sh, sso_stop.sh, and sso_monitor.sh in $SSO_TMP).

  2. Edit the server-based environment setting definition file (/opt/hitachi/HAmon/etc/servers) to delete the SSO-related settings that were added.

(d) Check the cluster software setting file

The following describes how to check the cluster software setting file for each type of cluster software:

In Veritas Cluster Server:

Execute the following command to verify the VCS setting file:

# hacf -verify /etc/VRTSvcs/conf/config

If errors are detected by this command, correct the VCS setting file, and then execute the command again until all errors are corrected.
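The correct-and-re-verify cycle can be expressed as a loop. This is a hedged sketch: hacf -verify and the configuration path are as shown above, but the DRY_RUN guard and the interactive prompt are illustrative additions, not part of the product.

```shell
#!/bin/sh
# Re-run hacf -verify until the VCS setting file passes. In DRY_RUN mode
# (the default here) the command is only printed and assumed to succeed.
DRY_RUN=${DRY_RUN:-1}

verify_vcs() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "WOULD RUN: hacf -verify $1"
    return 0
  fi
  hacf -verify "$1"    # exits nonzero while errors remain
}

until verify_vcs /etc/VRTSvcs/conf/config; do
  echo "Errors found: correct main.cf, then press Enter to verify again."
  read dummy
done
```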

In HA Monitor:

There is no applicable procedure.

(e) Delete the cluster control script

Delete the cluster control script that was created when the environment was configured.

Note, however, that if you rename the logical host and then reconfigure the cluster system, you can reuse the cluster control script.

(f) Undo SSO cluster setup

In the basic configuration, if you switch the SSO execution server to the physical host on the active node or rename the logical host, undo SSO cluster setup so that the previous settings can be restored when you reconfigure the cluster.

Note that in a configuration that uses the HA Monitor server group, when you use the ssoclustersetup command in the following procedure to undo cluster setup on the active node, the underlying resource group is running. This means that the shared disk can be accessed from the active node. Therefore, you do not need to perform steps 1 (Mount the shared disk) and 3 (Unmount the shared disk). You only have to execute the ssoclustersetup command in step 2 (Undo SSO cluster setup).

To undo SSO cluster setup:

  1. Mount the shared disk.

    Mount the shared disk to the active node.

    This procedure is an example that assumes the following settings:

    • Absolute path of the shared disk used by SSO: /dev/dsk/dg01/vol1

    • Directory that SSO uses on the shared disk: /shared/sso

    The following shows an example of the command that mounts the shared disk for each type of cluster software:

    In Veritas Cluster Server:

    # vxdg import dg01
    # vxvol -g dg01 start vol1
    # mount -t vxfs /dev/dsk/dg01/vol1 /shared

    In HA Monitor:

    # vgchange -a y /dev/dsk/dg01
    # mount /dev/dsk/dg01/vol1 /shared
  2. Undo SSO cluster setup.

    Execute the cluster environment setting command (ssoclustersetup) to undo SSO cluster setup on the active node. For details about the cluster environment setting command, see ssoclustersetup (Linux only) in 5. Commands.

    For the arguments of the cluster environment setting command, specify values as shown in the following table:

    Argument          Value to be specified
    ----------------  --------------------------------------------------------
    First argument    -release (specifies that unsetup is to be performed)
    Second argument   -primary (specifies that the active node is the target)
    Third argument    Shared folder name on the shared disk
                      (example: /shared/sso)

    The following shows an example of the command that undoes SSO cluster setup:

    ssoclustersetup -release -primary /shared/sso

    If an error occurs, correct it with reference to the message that appears. After you have corrected the error, re-execute the command.

  3. Unmount the shared disk.

    Unmount the shared disk that was mounted in step 1. The following shows an example of the command to be executed for each type of cluster software:

    In Veritas Cluster Server:

    # umount /shared
    # vxdg deport dg01

    In HA Monitor:

    # umount /shared
    # vgchange -a n /dev/dsk/dg01
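Steps 1 to 3 for Veritas Cluster Server can be chained with an exit-status check after each command, so a failure stops the sequence before the next step runs. The commands and the dg01, vol1, and /shared values are the example settings shown above; the step wrapper and the DRY_RUN guard are illustrative additions.

```shell
#!/bin/sh
# VCS example: mount the shared disk, undo SSO cluster setup, then unmount.
# Aborts on the first failing command. DRY_RUN=1 (the default here) only
# prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}

step() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "WOULD RUN: $*"
    return 0
  fi
  "$@" || { echo "FAILED: $*" >&2; exit 1; }
}

step vxdg import dg01                               # step 1: mount
step vxvol -g dg01 start vol1
step mount -t vxfs /dev/dsk/dg01/vol1 /shared
step ssoclustersetup -release -primary /shared/sso  # step 2: undo setup
step umount /shared                                 # step 3: unmount
step vxdg deport dg01
```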

(2) Releasing the cluster system environment on the standby node

The procedure for releasing the cluster system environment on the standby node consists of the following steps:

  1. Delete SSO from the resource group

  2. Delete the cluster control script

  3. Undo SSO cluster setup

Each step of the above procedure is described below.

(a) Delete SSO from the resource group

Delete SSO from the resource group.

The following shows the procedure for deleting SSO from the resource group for each type of cluster software:

In Veritas Cluster Server:

Delete the SSO settings related to the application agent and resource dependencies from the VCS setting file (/etc/VRTSvcs/conf/config/main.cf).

In HA Monitor:

<In the basic configuration in which no server group is used>

Delete all of the SSO definitions that were added to the script that was created when the NNMi cluster was configured (/var/opt/OV/hacluster/group-name/cm2_*.sh).

<In the basic configuration in which server groups are used or in a distributed configuration>

Perform the following procedure:

  1. Delete the scripts created by a user when the SSO cluster was set up (sso_start.sh, sso_stop.sh, and sso_monitor.sh in $SSO_TMP).

  2. Edit the server-based environment setting definition file (/opt/hitachi/HAmon/etc/servers) to delete the SSO-related settings that were added.

(b) Delete the cluster control script

Delete the cluster control script that was created when the environment was configured.

Note, however, that if you rename the logical host and then reconfigure the cluster system, you can reuse the cluster control script.

(c) Undo SSO cluster setup

When you change the SSO operating environment of the standby node from the logical host to the physical host, you must reconfigure the cluster. In this case, undoing SSO cluster setup on the standby node allows the previous settings to be carried over.

Note that, in a configuration that uses the HA Monitor server group, when you use the ssoclustersetup command in the following procedure to undo cluster setup on the standby node, the underlying resource group runs on the active node, so the shared disk cannot be accessed from the standby node. For this reason, perform a failover in step 1 (Mount the shared disk) and a failback in step 3 (Unmount the shared disk). Step 2 (Undo SSO cluster setup) is unchanged.

To undo SSO cluster setup:

  1. Mount the shared disk.

    Mount the shared disk to the standby node.

    For details about the procedure, see step 1 (Mount the shared disk) in (1)(f) Undo SSO cluster setup.

  2. Undo SSO cluster setup.

    Execute the cluster environment setting command (ssoclustersetup) to undo SSO cluster setup on the standby node.

    For the arguments of the cluster environment setting command, specify values as shown in the following table:

    Argument          Value to be specified
    ----------------  --------------------------------------------------------
    First argument    -release (specifies that unsetup is to be performed)
    Second argument   -secondary (specifies that the standby node is the target)
    Third argument    Shared folder name on the shared disk

    The following shows an example of the command that undoes SSO cluster setup:

    ssoclustersetup -release -secondary /shared/sso

    If an error occurs, correct it with reference to the message that appears. After you have corrected the error, re-execute the command.

  3. Unmount the shared disk.

    For details about the procedure, see step 3 (Unmount the shared disk) in (1)(f) Undo SSO cluster setup.

(3) Delete the SSO shared data

After you have undone SSO cluster setup on the active and standby nodes, from the active node, delete the SSO shared data directory that was created on the shared disk when the environment was configured.

Note that, in a configuration that uses the HA Monitor server group, the underlying resource group runs on the active node during the following procedure. Therefore, you do not need to perform steps 1 (Mount the shared disk) and 3 (Unmount the shared disk). Perform only step 2 (Delete the SSO directory) on the active node.

To delete the SSO shared data directory:

  1. Mount the shared disk.

    Mount the shared disk to the active node. The following shows a command execution example for each type of cluster software:

    In Veritas Cluster Server:

    # vxdg import dg01
    # vxvol -g dg01 start vol1
    # mount -t vxfs /dev/dsk/dg01/vol1 /shared

    In HA Monitor:

    # vgchange -a y /dev/dsk/dg01
    # mount /dev/dsk/dg01/vol1 /shared
  2. Delete the SSO directory.

    Delete the SSO shared directory that has been created on the shared disk.

    # rm -r /shared/sso
  3. Unmount the shared disk.

    Unmount the shared disk that was mounted in step 1. The following shows a command execution example for each type of cluster software:

    In Veritas Cluster Server:

    # umount /shared
    # vxdg deport dg01

    In HA Monitor:

    # umount /shared
    # vgchange -a n /dev/dsk/dg01
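Because rm -r is destructive, step 2 can be guarded so the directory is removed only when it actually exists under the mount point; otherwise a run against an unmounted shared disk would silently do nothing. The remove_sso_share function and the SSO_SHARE variable are illustrative; only rm -r itself comes from the procedure above.

```shell
#!/bin/sh
# Delete the SSO shared directory only when it is present, and report
# clearly when it is not (already deleted, or shared disk not mounted).
remove_sso_share() {
  dir=$1
  if [ -d "$dir" ]; then
    rm -r "$dir" && echo "deleted $dir"
  else
    echo "not found: $dir (already deleted, or shared disk not mounted?)" >&2
    return 1
  fi
}

# /shared/sso is the example path used throughout this subsection.
remove_sso_share "${SSO_SHARE:-/shared/sso}" || true
```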

(4) Activate the NNMi resource group (in the basic configuration only)

Activate the NNMi resource group (if the cluster software is Veritas Cluster Server, start it).

The following shows a command execution example for each type of cluster software:

In Veritas Cluster Server:

  1. Start the cluster software on the active and standby nodes.

    # hastart
  2. If the NNMi resource group is not operating, activate it on the active node by executing the following command:

    # hagrp -online NNMi-resource-group-name -sys active-host-name

In HA Monitor:

In a configuration in which no server group is used, execute the following command:

# monbegin server-alias-name

(5) Delete the resource group (in the distributed configuration only)

On the active and standby nodes, if the resource group that was created for SSO includes other JP1 resources (such as JP1/Base), delete those JP1 resources first, and then delete the resource group.