Nonstop Database, HiRDB Version 9 System Operation Guide


26.2.7 VERITAS Cluster Server preparations

Read this subsection if you are using VERITAS Cluster Server as your cluster software.

To set up an environment for VERITAS Cluster Server, use the explanation in this subsection together with the VERITAS Cluster Server documentation. For details about VERITAS Cluster Server itself, see its documentation.

Organization of this subsection
(1) Groups and resources
(2) HiRDB resource type definition
(3) Agent definition pre-preparation
(4) Agent definition
(5) Creating an environment setup file

(1) Groups and resources

The unit that VERITAS Cluster Server uses for switching systems between nodes is called a group. The applications that make up a group operate by using what are referred to as resources. The resources that make up a HiRDB group are explained below.

(a) Setting up groups and resources

Provide NICs, logical IP addresses, and a shared disk, and set them up as VERITAS Cluster Server resources to form a group. In the explanation in this manual, VERITAS Volume Manager is used to set up the shared disk. When you set up these resources, you can use the resource types that VERITAS Cluster Server has already defined (the NIC, IP, and DiskGroup types).

You also need to set up HiRDB as a resource within the group so that it can be managed by VERITAS Cluster Server. For this purpose, define a new resource type for HiRDB. For details about how to define a resource type, see 26.2.7(2) HiRDB resource type definition.

The resource type name is HiRDB_S for a HiRDB single server configuration and HiRDB_P for a HiRDB parallel server configuration. This manual uses the generic notation HiRDB_x for resource type names. Substitute HiRDB_S or HiRDB_P as appropriate.

(b) Defining the parent-child relationships of resources

Parent-child relationships are defined for the resources defined within a group. For HiRDB to run, logical IP addresses (when IP addresses are inherited) and the shared disk must be enabled. Therefore, HiRDB_x type resources must be defined as parent resources to the IP type and DiskGroup type resources. The following figure shows a group configuration.

Figure 26-41 Group configuration

[Figure]

(c) Dummy file (applicable to the monitor mode only)

If HiRDB terminates abnormally while the system switchover facility is operating in the monitor mode, HiRDB restarts, so there is no need to monitor HiRDB's operating status. However, you must create a dummy file to notify VERITAS Cluster Server that HiRDB is currently active as a resource. This dummy file must satisfy all of the following conditions.

Conditions
  • The dummy file is created when HiRDB is started by VERITAS Cluster Server.
  • The dummy file is deleted when HiRDB is terminated by VERITAS Cluster Server.
  • While the dummy file exists, the HiRDB resource is considered to be running.

Hint
  If the dummy file is deleted inadvertently, VERITAS Cluster Server assumes that an error has occurred in the resource and switches systems. To prevent this, specify Critical=0 as the resource attribute value for the HiRDB_x type resource. Also, to prevent the dummy file from being deleted inadvertently, create it under the name $PDDIR/.pdveritas.
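
Because the dummy file alone tells VERITAS Cluster Server whether the HiRDB resource is active in the monitor mode, you can check the file directly to see how the resource is currently judged. The following is a minimal sketch, assuming that PDDIR is set in the shell and that the file name $PDDIR/.pdveritas from the hint above is used:

#!/bin/sh
# Minimal sketch: report how the HiRDB resource is judged from the dummy file.
# Assumes PDDIR is set and the dummy file name $PDDIR/.pdveritas.
if /bin/test -f "$PDDIR/.pdveritas"
then
    echo "dummy file exists: the HiRDB resource is treated as running"
else
    echo "dummy file does not exist: the HiRDB resource is treated as stopped"
fi
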
(d) Notes

(2) HiRDB resource type definition

To set up HiRDB as a resource, you must define resource type HiRDB_x for HiRDB. When you create a new resource type, you must also define an agent to monitor the resource. For details about how to define agents, see 26.2.7(3) Agent definition pre-preparation.

(a) HiRDB single server configuration

The following example shows a resource type definition for a HiRDB single server configuration:

 
type HiRDB_S (
     static str ArgList[] = { PdDir, PdConfPath, Ld_Library_Path, DummyFilePath }
     str PdDir
     str PdConfPath
     str Ld_Library_Path
     str DummyFilePath
)
 

Create a file containing the above definition under the name /etc/VRTSvcs/conf/config/HiRDB_STypes.cf.

(b) HiRDB parallel server configuration

The following example shows a resource type definition for a HiRDB parallel server configuration:

 
type HiRDB_P (
     static str ArgList[] = { PdDir, PdConfPath, Ld_Library_Path, DummyFilePath }
     str PdDir
     str PdConfPath
     str Ld_Library_Path
     str DummyFilePath
)
 

Create a file containing the above definition under the name /etc/VRTSvcs/conf/config/HiRDB_PTypes.cf.
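
The type definition file must be present in the VERITAS Cluster Server configuration directory on every node of the cluster. The following is a minimal sketch, assuming the node name reservedhost used in the examples in (5); the hacf command is a standard VERITAS Cluster Server command, but see its documentation for the exact procedure:

# Copy the type definition file created above to the standby node
# (use HiRDB_STypes.cf or HiRDB_PTypes.cf, whichever applies to your configuration).
scp /etc/VRTSvcs/conf/config/HiRDB_STypes.cf \
    reservedhost:/etc/VRTSvcs/conf/config/

# After main.cf includes the file (see (5)), check the configuration for syntax errors.
hacf -verify /etc/VRTSvcs/conf/config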

(3) Agent definition pre-preparation

Define an agent for the newly created resource type HiRDB_x. This subsection explains how to define an agent by using shell scripts. Before defining an agent, prepare the directory for the agent scripts and the agent program itself; for the exact procedure, see the VERITAS Cluster Server documentation.
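
As an illustration only, the following sketch shows one common way to prepare a script-based agent: create the directory that will hold the scripts listed in Table 26-14, and reuse the bundled ScriptAgent binary as the agent program. The path /opt/VRTSvcs/bin/ScriptAgent and the agent file name HiRDB_SAgent are assumptions; follow the procedure in the VERITAS Cluster Server documentation.

# Illustration only (HiRDB single server configuration).
# Create the directory that will hold the online/offline/monitor scripts (Table 26-14).
mkdir -p /opt/VRTSvcs/bin/HiRDB_S

# Assumption: reuse the bundled ScriptAgent binary as the agent program for HiRDB_S.
cp /opt/VRTSvcs/bin/ScriptAgent /opt/VRTSvcs/bin/HiRDB_S/HiRDB_SAgent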

(4) Agent definition

Define the action details of the agents as listed in the following table.

Table 26-14 Agent action definition items and file names

Agent action: Bringing a resource online
  • HiRDB single server configuration: /opt/VRTSvcs/bin/HiRDB_S/online
  • HiRDB parallel server configuration: /opt/VRTSvcs/bin/HiRDB_P/online
Agent action: Taking a resource offline
  • HiRDB single server configuration: /opt/VRTSvcs/bin/HiRDB_S/offline
  • HiRDB parallel server configuration: /opt/VRTSvcs/bin/HiRDB_P/offline
Agent action: Monitoring a resource
  • HiRDB single server configuration: /opt/VRTSvcs/bin/HiRDB_S/monitor
  • HiRDB parallel server configuration: /opt/VRTSvcs/bin/HiRDB_P/monitor

(a) Online script

An online script describes the processing that the agent performs when it brings a resource online. The following processing is required: starting HiRDB (the pdstart command) and then creating the dummy file.

HiRDB single server configuration

The following example shows an online script for a HiRDB single server configuration:

 
#!/bin/sh
# Arguments passed by VERITAS Cluster Server (see the ArgList of the HiRDB_S type):
#   $2=PdDir  $3=PdConfPath  $4=Ld_Library_Path  $5=DummyFilePath
PATH=/sbin:/usr/bin:/usr/sbin:/etc:/bin:/opt/VRTSvcs/bin:"$2"/bin
export PATH
PDDIR="$2"
PDCONFPATH="$3"
LD_LIBRARY_PATH="$4"
export PDDIR PDCONFPATH LD_LIBRARY_PATH
$PDDIR/bin/pdstart                 # Start HiRDB
/bin/touch "$5"                    # Create the dummy file to mark the resource as online
/bin/chmod 0400 "$5"
 

HiRDB parallel server configuration

The following example shows an online script for a HiRDB parallel server configuration:

 
#!/bin/sh
# Arguments passed by VERITAS Cluster Server (see the ArgList of the HiRDB_P type):
#   $2=PdDir  $3=PdConfPath  $4=Ld_Library_Path  $5=DummyFilePath
PATH=/sbin:/usr/bin:/usr/sbin:/etc:/bin:/opt/VRTSvcs/bin:"$2"/bin
export PATH
PDDIR="$2"
PDCONFPATH="$3"
LD_LIBRARY_PATH="$4"
export PDDIR PDCONFPATH LD_LIBRARY_PATH
$PDDIR/bin/pdstart -q              # Start the units (see the note below)
/bin/touch "$5"                    # Create the dummy file to mark the resource as online
/bin/chmod 0400 "$5"
 

Note
The pdstart -q command starts the units in a HiRDB parallel server configuration during use of the system switchover facility.
(b) Offline script

An offline script describes the processing that the agent performs when it takes a resource offline. The following processing is required: forcibly terminating HiRDB (the pdstop command) and then deleting the dummy file.

HiRDB single server configuration

The following example shows an offline script for a HiRDB single server configuration:

 
#!/bin/sh
# Arguments passed by VERITAS Cluster Server (see the ArgList of the HiRDB_S type):
#   $2=PdDir  $3=PdConfPath  $4=Ld_Library_Path  $5=DummyFilePath
PATH=/sbin:/usr/bin:/usr/sbin:/etc:/bin:/opt/VRTSvcs/bin:"$2"/bin
export PATH
PDDIR="$2"
PDCONFPATH="$3"
LD_LIBRARY_PATH="$4"
export PDDIR PDCONFPATH LD_LIBRARY_PATH
$PDDIR/bin/pdstop -f -q            # Forcibly terminate HiRDB (see the note below)
/bin/rm -f "$5"                    # Delete the dummy file to mark the resource as offline
 

Note
Specify the pdstop -f -q command to forcibly terminate HiRDB.
This offline script is executed when a system switchover occurs. When this script executes, HiRDB is forcibly terminated so that the systems can be switched immediately and the standby system can restart HiRDB in order to resume operations.

HiRDB parallel server configuration

The following example shows an offline script for a HiRDB parallel server configuration.

 
#!/bin/sh
# Arguments passed by VERITAS Cluster Server (see the ArgList of the HiRDB_P type):
#   $2=PdDir  $3=PdConfPath  $4=Ld_Library_Path  $5=DummyFilePath
PATH=/sbin:/usr/bin:/usr/sbin:/etc:/bin:/opt/VRTSvcs/bin:"$2"/bin
export PATH
PDDIR="$2"
PDCONFPATH="$3"
LD_LIBRARY_PATH="$4"
export PDDIR PDCONFPATH LD_LIBRARY_PATH
$PDDIR/bin/pdstop -z -q            # Forcibly terminate HiRDB (see the note below)
/bin/rm -f "$5"                    # Delete the dummy file to mark the resource as offline
 

Note
Specify the pdstop -z -q command to forcibly terminate HiRDB.
This offline script is executed when a system switchover occurs. When this script executes, HiRDB is forcibly terminated so that the systems can be switched immediately and the standby system can restart HiRDB in order to resume operations.
(c) Monitor script

A monitor script describes the processing that the agent performs when it monitors a resource (that is, checks whether the resource is online). The following processing is required: checking whether the dummy file exists and returning the result to VERITAS Cluster Server.

The following example shows a monitor script:

 
#!/bin/sh
# $5=DummyFilePath. The exit status reports the resource state to VERITAS Cluster
# Server (see its documentation for the monitor return values).
if /bin/test -f "$5"
then
    exit 110                       # Dummy file exists: report the resource as online
else
    exit 100                       # Dummy file does not exist: report the resource as offline
fi
 

The environment variable values and the dummy file path name that each script needs are passed to it as arguments when the script is executed. For the arguments and return values that are passed to each script, see the VERITAS Cluster Server documentation.
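
When you test the scripts by hand, you can invoke them with the argument order described in the VERITAS Cluster Server documentation (the resource name followed by the ArgList values). The following is a minimal sketch for the single server configuration, using the resource name and directory values from the examples in (5); the manual invocation is for illustration only.

# Make the agent scripts executable (HiRDB single server configuration).
chmod +x /opt/VRTSvcs/bin/HiRDB_S/online \
         /opt/VRTSvcs/bin/HiRDB_S/offline \
         /opt/VRTSvcs/bin/HiRDB_S/monitor

# Illustration: invoke the monitor script the way the agent would,
# passing the resource name and then the ArgList values.
/opt/VRTSvcs/bin/HiRDB_S/monitor gr1_HiRDB_S_UNT1 \
    /hirdb/pddir_s /hirdb/pddir_s/conf /hirdb/pddir_s/lib /hirdb/pddir_s/.pdveritas
echo $?    # 110 while the dummy file exists, 100 otherwise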

(5) Creating an environment setup file

Create the environment setup file for VERITAS Cluster Server (/etc/VRTSvcs/conf/config/main.cf) and set up groups and resources.

(a) Resource attribute setting value

The table below lists the values to be specified for resource attributes. For details about the individual items, see the VERITAS Cluster Server documentation.

Table 26-15 Values to be specified for resource attributes

HiRDB_x type resource
  • Critical: Specify 0.
  • PdDir: Specify the HiRDB directory name ($PDDIR).
  • PdConfPath: Specify the name of the directory storing the HiRDB system definition file ($PDCONFPATH).
  • Ld_Library_Path: Specify the name of the directory storing the HiRDB library ($LD_LIBRARY_PATH = $PDDIR/lib).
  • DummyFilePath: Specify the dummy file name ($PDDIR/.pdveritas).

DiskGroup type resource
  • DiskGroup: Specify the name of the VERITAS Volume Manager disk group to be used as a shared disk by HiRDB (monitor mode only).

IP type resource
  • Device: Specify the name of the NIC device related to the logical IP address to be used by HiRDB.
  • Address: Specify the logical IP address to be used by HiRDB.

NIC type resource
  • Device: Specify the device name of the NIC connected to the network to be used by HiRDB.
  • NetworkHosts: Specify the IP address of a host on the network to be used by HiRDB. This attribute is optional.
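
The examples below set these attribute values directly in main.cf. As an alternative, on a running cluster the same values can be set from the command line. The following is a minimal sketch, assuming that the group gr1 and the resource and directory names from the examples below are used and that the HiRDB_S type has already been loaded; see the VERITAS Cluster Server documentation for the exact command usage.

# Illustration only: set the HiRDB_S attribute values from the command line.
haconf -makerw
hares -add gr1_HiRDB_S_UNT1 HiRDB_S gr1
hares -modify gr1_HiRDB_S_UNT1 Critical 0
hares -modify gr1_HiRDB_S_UNT1 PdDir /hirdb/pddir_s
hares -modify gr1_HiRDB_S_UNT1 PdConfPath /hirdb/pddir_s/conf
hares -modify gr1_HiRDB_S_UNT1 Ld_Library_Path /hirdb/pddir_s/lib
hares -modify gr1_HiRDB_S_UNT1 DummyFilePath /hirdb/pddir_s/.pdveritas
haconf -dump -makero
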
(b) Environment setup file example (when IP addresses are inherited)

include "types.cf"
include "HiRDB_STypes.cf"
 
cluster vcs (
      UserNames = { vcsadm = cDi1yyJLOgPWY }
      CounterInterval = 5
      Factor = { runque = 5, memory = 1, disk = 10, cpu = 25, network = 5 }
      MaxFactor = { runque = 100, memory = 10, disk = 100, cpu = 100, network = 100 }
      )
 
system mainhost
 
system reservedhost
 
snmp vcs (
        TrapList = { 1 = "A new system has joined the VCS Cluster",
                 2 = "An existing system has changed its state",
                 3 = "A service group has changed its state",
                 4 = "One or more heartbeat links has gone down",
                 5 = "An HA service has done a manual restart",
                 6 = "An HA service has been manually idled",
                 7 = "An HA service has been successfully started" }
        )
 
group gr1 (
        SystemList = { mainhost, reservedhost }
        AutoStartList = { mainhost }
        )
 
        HiRDB_S gr1_HiRDB_S_UNT1 (
                Critical = 0
                PdDir = "/hirdb/pddir_s"
                PdConfPath = "/hirdb/pddir_s/conf"
                Ld_Library_Path = "/hirdb/pddir_s/lib"
                DummyFilePath = "/hirdb/pddir_s/.pdveritas"
                )
 
        DiskGroup gr1_DiskGroup_sharedg1 (
                DiskGroup = sharedg1
                )
 
        IP gr1_IP_logicalhost (
                Device = hme0
                Address = "172.16.161.177"
                )
 
        NIC gr1_NIC_hme0 (
                Device = hme0
                NetworkHosts = { "172.16.161.1" }
                )
 
        gr1_HiRDB_S_UNT1 requires gr1_DiskGroup_sharedg1
        gr1_DiskGroup_sharedg1 requires gr1_IP_logicalhost
        gr1_IP_logicalhost requires gr1_NIC_hme0

(c) Environment setup file example (when IP addresses are not inherited)

include "types.cf"
include "HiRDB_STypes.cf"
 
cluster vcs (
      UserNames = { vcsadm = cDi1yyJLOgPWY }
      CounterInterval = 5
      Factor = { runque = 5, memory = 1, disk = 10, cpu = 25, network = 5 }
      MaxFactor = { runque = 100, memory = 10, disk = 100, cpu = 100, network = 100 }
      )
 
system mainhost
 
system reservedhost
 
snmp vcs (
        TrapList = { 1 = "A new system has joined the VCS Cluster",
                 2 = "An existing system has changed its state",
                 3 = "A service group has changed its state",
                 4 = "One or more heartbeat links has gone down",
                 5 = "An HA service has done a manual restart",
                 6 = "An HA service has been manually idled",
                 7 = "An HA service has been successfully started" }
        )
 
group gr1 (
        SystemList = { mainhost, reservedhost }
        AutoStartList = { mainhost }
        )
 
        HiRDB_S gr1_HiRDB_S_UNT1 (
                Critical = 0
                PdDir = "/hirdb/pddir_s"
                PdConfPath = "/hirdb/pddir_s/conf"
                Ld_Library_Path = "/hirdb/pddir_s/lib"
                DummyFilePath = "/hirdb/pddir_s/.pdveritas"
                )
 
        DiskGroup gr1_DiskGroup_sharedg1 (
                DiskGroup = sharedg1
                )
 
        NIC gr1_NIC_hme0 (
                Device = hme0
                NetworkHosts = { "172.16.161.1" }
                )
 
        gr1_HiRDB_S_UNT1 requires gr1_DiskGroup_sharedg1
        gr1_DiskGroup_sharedg1 requires gr1_NIC_hme0
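
After the environment setup file has been created, the configuration is typically checked and the group brought online on the primary node. The following is a minimal sketch, assuming the group name gr1 and the node name mainhost from the examples above; hacf, hastart, hagrp, and hares are standard VERITAS Cluster Server commands, but see its documentation for the exact operating procedure.

# Check the edited configuration for syntax errors.
hacf -verify /etc/VRTSvcs/conf/config

# Start VERITAS Cluster Server (if it is not already running),
# then bring the group online on the primary node.
hastart
hagrp -online gr1 -sys mainhost

# Confirm the state of the group and the HiRDB resource.
hagrp -state gr1
hares -state gr1_HiRDB_S_UNT1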