
17.4.4 Configuring NNMi for HA (UNIX)

This subsection explains how to configure NNMi for HA in a UNIX environment.

In an HA configuration for NNMi, you create a new resource group dedicated to NNMi. Therefore, begin the configuration procedure in a state where the resource group to be configured does not yet exist.

The script (nnmhaconfigure.ovpl) used to configure NNMi for HA internally creates a resource group and individual resources for the cluster software. When the configuration procedure is completed, a resource group with the following components has been configured.

Table 17‒9: Components of resource group for NNMi in HP Serviceguard

  Item                         File name
  ---------------------------  --------------------------------------------------
  Package configuration file   /etc/cmcluster/resource-group/resource-group.conf
  Package control script       /etc/cmcluster/resource-group/resource-group.cntl
  Monitoring script            /etc/cmcluster/resource-group/resource-group.mon

In HP Serviceguard, nnmhaconfigure.ovpl places the set of package files listed above in /etc/cmcluster/resource-group and then performs the configuration procedure.
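
For reference, after nnmhaconfigure.ovpl completes you can check the created package from the HP Serviceguard side. The following is a minimal sketch, assuming the HA resource group (package) name jp1ha1 used in the configuration example in (1)(c) below:

  cmviewcl -v -p jp1ha1
  cat /etc/cmcluster/jp1ha1/jp1ha1.conf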

Table 17‒10: Components of resource group for NNMi in Veritas Cluster Server or Symantec Cluster Server

  Resource name           Resource type   Description
  ----------------------  --------------  --------------------------------------------------
  resource-group-ip       IP              Controls virtual IP addresses.
  resource-group-dg       DiskGroup       Controls disk groups.
  resource-group-volume   Volume          Controls volumes.
  resource-group-mount    Mount           Controls shared file systems.
  resource-group-app      Application     Controls the start, stop, and monitoring of NNMi.

In VCS or SCS, nnmhaconfigure.ovpl configures the above resources by internally executing commands, such as hagrp and hares.

The following shows an example of configuration for each resource.

Example: Definition of VCS or SCS configuration file main.cf

Angle brackets (< >) enclose the setting values specified in nnmhaconfigure.ovpl.

group <resource_group> (
  SystemList = { <node1> = 1 , <node2> = 1}
  UserStrGlobal = "NNM_INTERFACE=<virtual_host>;HA_LOCALE=<LOCALE>;HA_MOUNT_POINT=<mountpoint>"
  )
 
  Application <resource_group>-app (
     StartProgram = "/opt/OV/misc/nnm/ha/nnmharg.ovpl NNM -start <resource_group>"
     StopProgram = "/opt/OV/misc/nnm/ha/nnmharg.ovpl NNM -stop <resource_group>"
     CleanProgram = "/opt/OV/misc/nnm/ha/nnmharg.ovpl NNM -clean <resource_group>"
     MonitorProgram = "/opt/OV/misc/nnm/ha/nnmharg.ovpl NNM -monitor
                       <resource_group> -return 1 0"
     OnlineTimeout = 1800
     )
  DiskGroup <resource_group>-dg (
     DiskGroup = <disk_group>
     )
  IP <resource_group>-ip (
     Device = <network_interface_of_virtual_host>
     Address = "10.208.228.159"
     NetMask = "255.255.255.0"
     )
  Mount <resource_group>-mount (
     MountPoint = "<mountpoint>"
     BlockDevice = "/dev/vx/dsk/<disk_group>/<volume_group>"
     FSType = <type_of_shared_file_systems>
     FsckOpt = "-y"
     )
  Volume <resource_group>-volume (
     Volume = <volume_group>
     DiskGroup = <disk_group>
     )
  <resource_group>-app requires <resource_group>-ip
  <resource_group>-app requires <resource_group>-mount
  <resource_group>-mount requires <resource_group>-volume
  <resource_group>-volume requires <resource_group>-dg
  <resource_group>-volume requires <resource_group>-ip
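
After the configuration, the created group and resources can also be checked from the VCS or SCS side. The following is a minimal sketch, assuming the HA resource group name jp1ha1 used in the configuration example in (1)(c) below:

  hagrp -state jp1ha1
  hares -state jp1ha1-app
  hares -display jp1ha1-app
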
Table 17‒11: Components of resource group for NNMi in HA Monitor

  Configuration item            Setting (Cm2 control script)
  ----------------------------  ----------------------------------------------------
  name (start)                  /var/opt/OV/hacluster/resource-group/cm2_start.sh
  termcommand (stop)            /var/opt/OV/hacluster/resource-group/cm2_stop.sh
  patrolcommand (monitoring)    /var/opt/OV/hacluster/resource-group/cm2_monitor.sh

Important note

For HA Monitor, configure NNMi without using nnmhaconfigure.ovpl. For details about the procedure, see the Release Notes.

Organization of this subsection

(1) Configuring NNMi on the primary cluster node

(2) Configuring NNMi on the secondary cluster node

(1) Configuring NNMi on the primary cluster node

Complete the procedure described below on the primary cluster node.

(a) Preparations

Start with the preparations:

  1. If you have not already done so, complete the procedure described in 17.2 Verifying the prerequisites to configuring NNMi for HA.

  2. If you have not already done so, install NNMi, and then verify that NNMi is working correctly.

  3. Use the following command to back up all NNMi settings and data:

    Example:

    /opt/OV/bin/nnmbackup.ovpl -scope all -target directory

    For details about this command, see Chapter 18. NNMi Backup and Restore Tools.

    When the NNMi cluster environment is first configured, the data on the primary cluster node must exactly match the data on the secondary cluster node. Therefore, the backup data obtained here will be restored during the secondary cluster node configuration procedure.
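
    As a concrete example, assuming a hypothetical backup directory /var/tmp/nnm_pre_ha_backup (any directory with sufficient free space can be used):

    /opt/OV/bin/nnmbackup.ovpl -scope all -target /var/tmp/nnm_pre_ha_backup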

(b) Copying data to the shared disk

Next, copy data to the shared disk.

  1. Create the directory mount point for the shared disk.

    Verify that the mount point directory for the shared disk has been created with root as the owning user, sys as the owning group, and the permissions set to 555.

    Example:

    ls -ld HA-mount-point
  2. Provide a shared disk for the NNMi HA resource group.

    Important note

    Verify that the provided shared disk satisfies the following conditions:

    • It has already been formatted

    • It has enough free space

    • It is not being used by any other resource group

  3. Activate the shared disk, and then mount it.

    Example:

    • Using VxVM/VxFS for disk management in Solaris VCS or SCS

    vxdg import disk-group
    vxvol -g disk-group startall
    mount -F vxfs /dev/vx/dsk/disk-group/volume HA-mount-point
    • Using VxVM/VxFS for disk management in Linux VCS or SCS

    vxdg import disk-group
    vxvol -g disk-group startall
    mount -t vxfs /dev/vx/dsk/disk-group/volume HA-mount-point
    • HP-UX HP SG

    vgchange -c n volume-group
    vgchange -a y volume-group
    mount /dev/volume-group/logical-volume HA-mount-point
  4. Stop NNMi:

    /opt/OV/bin/ovstop -c
  5. Copy the NNMi files to the shared disk:

    /opt/OV/misc/nnm/ha/nnmhadisk.ovpl NNM -to HA-mount-point
    Important note

    The directory NNM is created immediately below the specified mount point (HA-mount-point/NNM).

    The storage directory cannot be renamed.

  6. Unmount the shared disk and deactivate it.

    Example:

    • Configuration using VCS or SCS and VxVM/VxFS

      umount HA-mount-point

      vxvol -g disk-group stopall

      vxdg deport disk-group

    • HP-UX HP SG

      umount HA-mount-point

      vgchange -a n volume-group

      vgchange -c y volume-group

    HP SG must be running while you perform these operations.
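
    You can check whether HP Serviceguard is running on the node by viewing the cluster status. A minimal example:

    cmviewcl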

(c) Configuring NNMi for HA

Next, run NNMi's HA configuration.

  1. Create an NNMi HA resource group:

    /opt/OV/misc/nnm/ha/nnmhaconfigure.ovpl NNM

    For details about the configuration items for this command, see 17.9.2 NNMi-provided HA configuration scripts.

    For the shared disk type, make sure that you specify disk, not none.

    Configuration example

    The HA configuration items are listed below in the order in which nnmhaconfigure.ovpl prompts for them interactively. Enter the appropriate values based on the information provided in Table 17-3 NNMi HA primary cluster node configuration information in 17.4.2 Configuring NNMi for HA.

    HA configuration item                   Example setting
    --------------------------------------  ---------------------------------------
    HA resource group name                  jp1ha1
    Virtual host name                       jp1ha1
    Network interface of the virtual host   lan0
    Type of shared file system              disk (make sure that you specify disk)
    Disk type                               vxfs
    Disk group (VCS or SCS only)            shdg3
    Volume group                            vg03
    Logical volume (HP SG only)             lvol1
    Directory to be mounted                 /shdsk1

    Important note

    Before you execute the configuration command, check the following notes:

    • NNMi in an HA configuration starts in the locale that was in effect when nnmhaconfigure.ovpl was executed. Verify that the window (terminal) used to execute nnmhaconfigure.ovpl is set to one of the following locales (LANG environment variable):

      HP-UX HPSG: ja_JP.SJIS, ja_JP.eucJP, C, or zh_CN.hp15CN

      Solaris VCS or Solaris SCS: ja_JP.PCK, ja_JP.eucJP, C, or zh

      Linux VCS or Linux SCS: ja_JP.UTF-8, C, or zh_CN.utf8

      To change the locale after HA configuration, see 17.6 Maintaining the HA Configuration.

    • If a value specified in nnmhaconfigure.ovpl is already in use by another resource group or resource, a resource creation error occurs. Before you execute nnmhaconfigure.ovpl, verify that the specified values are not already in use.

    • If a specified resource group name, IP address, or disk is already in use, the cluster software command executed to create resources results in an error. If an error occurs, nnmhaconfigure.ovpl terminates abnormally at that point, and the resource group and resources that had been created up to that point remain. Delete those remaining resources, resolve the error, and then re-execute nnmhaconfigure.ovpl.

    • While nnmhaconfigure.ovpl is running, the messages shown below might be displayed. They are output by internal processing, and no action is needed.

      The disk group was not found. Import will be attempted.

      Unable to perform the security token exchange with cmclconfd on node xxxxx

      Cannot connect to configuration daemon (cmclconfd) on node xxxxx

    Execution example

    The following shows example screen output when the example configuration values are specified; each value entered by the user follows a question mark (?).

    • Example for HPSG (HP-UX)

    # /opt/OV/misc/nnm/ha/nnmhaconfigure.ovpl NNM
    QUESTION: Enter the name of HA resource group:  ? jp1ha1
     
    A primary node configuration has been discovered.
     
    QUESTION: Enter a valid virtual host name:  ? jp1ha1
    Available network interface:
     
    Network subnet mask  Network interface
    none            lan3*
    192.168.69.0            lan2
    10.208.69.0             lan0
     
    Available value:
    1: lan3*
    2: lan2
    3: lan0
    QUESTION: Enter the type of shared file system:  ? 3
     
    Available value:
    1: disk
    2: none
    QUESTION: Enter the type of shared file system (disk, none):  ? 1
     
    Available value:
    1: vxfs
    QUESTION: Enter the name of disk type:  ? 1
    QUESTION: Enter the name of volume group:  ? vg03
     
    Available value:
    1: group
    2: lvol1
    3: rlvol1
    QUESTION: Enter the name of volume group:  ? 2
    QUESTION: Enter the directory to mount disk:  ? /shdsk1
    Creating a resource group.
    Completed the cluster update
    Configuring the HA value /var/opt/OV/shared/nnm/conf/ov.conf.
    Deleting the boot script.
    #
    • Example for VCS or SCS (Linux)

    # /opt/OV/misc/nnm/ha/nnmhaconfigure.ovpl NNM
    QUESTION: Enter the name of HA resource group:  ? jp1ha1
     
    A primary node configuration has been discovered.
     
    QUESTION: Enter a valid virtual host name:  ? jp1ha1
    Information: Use of network interface information:
     
    Network interface: bond0
    Network subnet mask: 255.255.255.0
     
    Available value:
    1: disk
    2: none
    QUESTION: Enter the type of shared file system (disk, none):  ? 1
     
    Available value:
    1: vxfs
    2: ext2
    3: ext3
    QUESTION: Enter the name of disk type:  ? 1
    QUESTION: Enter the name of disk group:  ? shdg3
    The disk group was not found. Import will be attempted.
    QUESTION: Enter the name of volume group:  ? shvol3
    QUESTION: Enter the directory to mount disk:  ? /shdsk1
    Creating a resource group.
    VCS NOTICE V-16-1-10136 Group added; populating SystemList and setting the Parallel attribute recommended before adding resources
    Configuring the HA value /var/opt/OV/shared/nnm/conf/ov.conf.
    Deleting the boot script.
    Note: Updating NNMi FQDN to match the specified virtual host name. Configuring fqdn to jp1ha1.
     
    Configuring the domain to xxx.xxx.
     
    Generating a new SSL certificate.
     
    Generating a key store certificate of jp1ha1.selfsigned.
    [Completed successfully]
     
    Exporting the generated certificate to the trust store.
     
    The certificate has been saved in temporary.cert.
    The certificate has been added to the key store.
    #
  2. In VCS or SCS, enable the created resources (set Enabled to 1).

    Example:

    hares -modify resource-group-app Enabled 1

    hares -modify resource-group-dg Enabled 1

    hares -modify resource-group-ip Enabled 1

    hares -modify resource-group-mount Enabled 1

    hares -modify resource-group-volume Enabled 1

    Next, save the VCS or SCS configuration to the configuration file main.cf and make the configuration read-only:

    haconf -dump -makero

    nnmhaconfigure.ovpl does not configure the VCS or SCS resources that monitor network interfaces (such as NIC, MultiNICA, and MultiNICB). Add such resources if necessary; a sketch is shown at the end of this step.

    Important note

    If you use HTTPS communications to access the NNMi server, you must configure the cluster to use an appropriate certificate. For details, see 8.5 Configuring a high availability cluster to use self-signed or Certificate Authority certificates.
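
    As a reference for adding a network interface monitoring resource (see the note earlier in this step), the following is a minimal sketch, assuming the resource group name jp1ha1 and the interface bond0 from the configuration examples. A NIC resource is shown; adjust the resource type and attributes to your environment:

    haconf -makerw
    hares -add jp1ha1-nic NIC jp1ha1
    hares -modify jp1ha1-nic Device bond0
    hares -modify jp1ha1-nic Enabled 1
    hares -link jp1ha1-ip jp1ha1-nic
    haconf -dump -makero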

(d) Verifying the startup

Lastly, verify the startup.

  1. Start the NNMi HA resource group.

    /opt/OV/misc/nnm/ha/nnmhastartrg.ovpl NNM resource-group

    This command returns the prompt without waiting for the HA resource group to finish starting. Use the commands of the HA cluster software to verify that the resource group has started (example commands are shown below).

    If NNMi does not start successfully, see 17.8 Troubleshooting the HA Configuration.
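
    For reference, the following shows minimal status checks with each cluster product, assuming the resource group name jp1ha1 (substitute your own resource group name):

    • HP Serviceguard

    cmviewcl -v -p jp1ha1
    • VCS or SCS

    hagrp -state jp1ha1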

NNMi is now running under HA.

Important note

Do not use the ovstart and ovstop commands for normal NNMi operation in the HA configuration. Use these commands only when instructed to do so for HA maintenance purposes. To start and stop NNMi in the HA configuration, start or stop the HA resource group by using the cluster software.
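
For reference, the following shows typical cluster software commands for starting and stopping the NNMi resource group. This is a sketch assuming the resource group name jp1ha1 and the node name node2; substitute your own names:

• HP Serviceguard

  cmrunpkg -n node2 jp1ha1
  cmhaltpkg jp1ha1

• VCS or SCS

  hagrp -online jp1ha1 -sys node2
  hagrp -offline jp1ha1 -sys node2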

(2) Configuring NNMi on the secondary cluster node

Complete the procedure described below on one secondary cluster node at a time.

(a) Preparations

Start with the preparations:

  1. If you have not already done so, complete the procedure described in 17.2 Verifying the prerequisites to configuring NNMi for HA.

  2. If you have not already done so, install NNMi, and then verify that NNMi is working correctly.

  3. Restore the backup data.

    On the secondary cluster node, restore the backup data obtained in step 3 of subsection (a) in 17.4.4(1) Configuring NNMi on the primary cluster node:

    /opt/OV/bin/nnmrestore.ovpl -force -partial -source backup-data

    For details about this command, see Chapter 18. NNMi Backup and Restore Tools.
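
    As a concrete example matching the hypothetical backup directory /var/tmp/nnm_pre_ha_backup used in the backup example on the primary cluster node (the backup data must first be transferred to, or be accessible from, the secondary cluster node):

    /opt/OV/bin/nnmrestore.ovpl -force -partial -source /var/tmp/nnm_pre_ha_backup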

(b) Configuring NNMi for HA

Next, run NNMi's HA configuration.

  1. Create the mount point for the shared disk.

    Use the same name for this mount point as the name of the mount point created in step 1 of subsection (b) in 17.4.4(1) Configuring NNMi on the primary cluster node.

  2. Stop NNMi.

    /opt/OV/bin/ovstop -c
  3. Configure the NNMi HA resource group:

    /opt/OV/misc/nnm/ha/nnmhaconfigure.ovpl NNM

    Specify the HA resource group name when prompted by the command.

    Execution example

    # /opt/OV/misc/nnm/ha/nnmhaconfigure.ovpl NNM
     
    QUESTION: Enter the name of HA resource group:  ? jp1ha1
    A secondary node configuration has been discovered.
    Completed the cluster update
    Deleting the boot script.
    Note: Updating NNMi FQDN to match the specified virtual host name. Configuring fqdn to jp1ha1.xxx.xxx.
     
    Setting the domain to .xxx.xxx.
     
    Generating a new SSL certificate.
     
    #
  4. In VCS or SCS, apply configuration changes to the HA cluster:

    haconf -dump -makero
  5. Verify that configuration was successful:

    /opt/OV/misc/nnm/ha/nnmhaclusterinfo.ovpl -group resource-group -nodes

    This command outputs a list of all nodes that are in the specified HA resource group.

  6. Optionally, test the configuration by taking the HA resource group on the primary node offline and then bringing the HA resource group on the secondary node online.
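
    For reference, the following is a minimal sketch of such a failover test, assuming the resource group name jp1ha1 and the node names node1 (primary) and node2 (secondary); substitute your own names:

    • HP Serviceguard

    cmhaltpkg jp1ha1
    cmrunpkg -n node2 jp1ha1
    • VCS or SCS

    hagrp -switch jp1ha1 -to node2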