aix6100-08 PowerHA 6.1 Oracle 11g R2 mutual-takeover cluster: disk heartbeat goes DOWN on RG move
Views: 5360 | Replies: 0   Topic: aix6100-08 PowerHA 6.1 Oracle 11g R2 mutual-takeover cluster: disk heartbeat goes down on RG move
This topic was moved by Administrator on 2014-9-11 23:42:56.
Posted by white on 2014-9-11 9:27:41 (OP):

Environment: AIX 6100-08, PowerHA 6.1, Oracle 11g R2, two-node mutual-takeover cluster on iSCSI storage. Only a disk heartbeat is configured; there is no serial (RS232) heartbeat network. Either node runs the workload fine on its own.

But with both nodes up, a manual RG move fails: the resource group goes into ERROR state on both nodes, and the disk heartbeat on the destination node goes DOWN for no apparent reason. That is, moving A -> B takes B's disk heartbeat down, and moving B -> A takes A's down. hacmp.out is pasted below; please take a look. Thanks.

By the way, the only way to recover is to bring the RG back online manually on its original node and then clstop both nodes; otherwise the downed disk heartbeat never comes back up.

I hit the same problem earlier with HACMP 5.4.1 (with the latest patches applied). Back then I even had both a serial heartbeat and a disk heartbeat configured, but the disk heartbeat still went down and the RG move still failed.
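For anyone debugging a similar diskhb failure, the heartbeat state can be inspected with the standard PowerHA/RSCT tools. This is only a sketch of the usual checks; hdisk2 is a placeholder for the actual heartbeat disk, and the commands must be run as root on the cluster nodes:

```shell
# Topology Services view: lists each heartbeat ring (including the
# diskhb network) and whether its adapters are currently up or down.
lssrc -ls topsvcs

# Cluster interface/device definitions: confirm the diskhb network
# and which hdisk it uses on each node.
/usr/es/sbin/cluster/utilities/cllsif

# Raw read of the heartbeat disk to verify the iSCSI LUN is reachable
# at all (hdisk2 is an assumed name; substitute your heartbeat disk).
lquerypv -h /dev/hdisk2

# End-to-end diskhb test: start the receiver on one node, then the
# transmitter on the other. If this fails while the LUN is still
# readable, the problem is in the heartbeat path, not the storage.
/usr/sbin/rsct/bin/dhb_read -p hdisk2 -r   # on node A
/usr/sbin/rsct/bin/dhb_read -p hdisk2 -t   # on node B
```

Running these right after the failed RG move (before the clstop workaround) would show whether the diskhb adapter is marked down by Topology Services or whether the disk itself has become unreadable on the destination node.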

hacmp.out from node db01:
Mar 24 19:21:58 EVENT START: rg_move_release ha_db01 1
:rg_move_release[+54] [[ high = high ]]
:rg_move_release[+54] version=1.6
:rg_move_release[+56] set -u
:rg_move_release[+58] [ 2 != 2 ]
:rg_move_release[+64] set +u
:rg_move_release[+66] clcallev rg_move ha_db01 1 RELEASE
Mar 24 19:21:58 EVENT START: rg_move ha_db01 1 RELEASE
:get_local_nodename[+42] [[ high = high ]]
:get_local_nodename[+42] version=1.2.1.19
:get_local_nodename[+43] :get_local_nodename[+43] cl_get_path
HA_DIR=es
:get_local_nodename[+44] :get_local_nodename[+44] cl_get_path -S
OP_SEP=~
:get_local_nodename[+46] AIXODMDIR=/etc/objrepos
:get_local_nodename[+47] HAODMDIR=/etc/es/objrepos
:get_local_nodename[+49] :get_local_nodename[+49] uname -m
UNAME=0057D5EC4C00
:get_local_nodename[+55] export PLATFORM=__AIX__
:get_local_nodename[+61] export ODMDIR=/etc/es/objrepos
:get_local_nodename[+63] :get_local_nodename[+63] /usr/es/sbin/cluster/utilities/cllsclstr -N
nodename=ha_db01
:get_local_nodename[+65] :get_local_nodename[+65] cllsnode -cS
:get_local_nodename[+65] cut -d: -f1
NODENAME=ha_db01
ha_db02
:get_local_nodename[+69] [[ ha_db01 = ha_db01 ]]
:get_local_nodename[+72] print ha_db01
:get_local_nodename[+73] exit 0
:rg_move[+71] version=1.49
:rg_move[+81] STATUS=0
:rg_move[+83] [ ! -n ]
:rg_move[+85] EMULATE=REAL
:rg_move[+91] set -u
:rg_move[+93] export NODENAME=ha_db01
:rg_move[+94] RGID=1
:rg_move[+95] [[ 3 = 3 ]]
:rg_move[+97] ACTION=RELEASE
:rg_move[+104] odmget -qid=1 HACMPgroup
:rg_move[+104] egrep group =
:rg_move[+104] awk {print $3}
:rg_move[+104] eval RGNAME="ha_oracle_rg"
:rg_move[+104] RGNAME=ha_oracle_rg
:rg_move[+106] UPDATESTATD=0
:rg_move[+107] export UPDATESTATD
:rg_move[+111] export RG_MOVE_EVENT=true
:rg_move[+116] group_state=$RESGRP_ha_oracle_rg_ha_db01
:rg_move[+117] set +u
:rg_move[+118] eval print $RESGRP_ha_oracle_rg_ha_db01
:rg_move[+118] print ONLINE
:rg_move[+118] export RG_MOVE_ONLINE=ONLINE
:rg_move[+119] set -u
:rg_move[+120] RG_MOVE_ONLINE=ONLINE
:rg_move[+127] rm -f /tmp/.NFSSTOPPED
:rg_move[+128] rm -f /tmp/.RPCLOCKDSTOPPED
:rg_move[+135] set -a
:rg_move[+136] :rg_move[+136] clsetenvgrp ha_db01 rg_move ha_oracle_rg
:clsetenvgrp[+50] [[ high = high ]]
:clsetenvgrp[+50] version=1.16
:clsetenvgrp[+52] usingVer=clSetenvgrp
:clsetenvgrp[+57] clSetenvgrp ha_db01 rg_move ha_oracle_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+58] exit 0
clsetenvgrp_output=NFS_ha_oracle_rg="TRANS"
NFSNODE_ha_oracle_rg="ha_db02"
FORCEDOWN_GROUPS=""
RESOURCE_GROUPS="ha_oracle_rg"
HOMELESS_GROUPS=""
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS="R"
ASSOCIATE_ACTIONS="MO"
AUXILLIARY_ACTIONS="N"
:rg_move[+137] RC=0
:rg_move[+138] eval NFS_ha_oracle_rg="TRANS"
NFSNODE_ha_oracle_rg="ha_db02"
FORCEDOWN_GROUPS=""
RESOURCE_GROUPS="ha_oracle_rg"
HOMELESS_GROUPS=""
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS="R"
ASSOCIATE_ACTIONS="MO"
AUXILLIARY_ACTIONS="N"
:rg_move[+138] NFS_ha_oracle_rg=TRANS
:rg_move[+138] NFSNODE_ha_oracle_rg=ha_db02
:rg_move[+138] FORCEDOWN_GROUPS=
:rg_move[+138] RESOURCE_GROUPS=ha_oracle_rg
:rg_move[+138] HOMELESS_GROUPS=
:rg_move[+138] HOMELESS_FOLLOWER_GROUPS=
:rg_move[+138] ERRSTATE_GROUPS=
:rg_move[+138] PRINCIPAL_ACTIONS=R
:rg_move[+138] ASSOCIATE_ACTIONS=MO
:rg_move[+138] AUXILLIARY_ACTIONS=N
:rg_move[+139] set +a
:rg_move[+143] [[ 0 -ne 0 ]]
:rg_move[+143] [[ -z ha_oracle_rg ]]
:rg_move[+152] [[ -z FALSE ]]
:rg_move[+198] set -a
:rg_move[+199] clsetenvres ha_oracle_rg rg_move
:rg_move[+199] eval PRINCIPAL_ACTION="RELEASE" ASSOCIATE_ACTION="MOUNT" AUXILLIARY_ACTION="NONE" VG_RR_ACTION="RELEASE" SIBLING_NODES= SIBLING_NONOWNER_NODES= SIBLING_ACQUIRING_NODES= SIBLING_RELEASING_NODES= FOLLOWER_ACTION="NONE" NFS_HOST=ha_db02 DISK= CONCURRENT_VOLUME_GROUP= EXPORT_FILESYSTEM= EXPORT_FILESYSTEM_V4= STABLE_STORAGE_PATH= STABLE_STORAGE_COOKIE= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= COMMUNICATION_LINKS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= INACTIVE_TAKEOVER= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= PPRC_REP_RESOURCE= ERCMF_REP_RESOURCE= SVCPPRC_REP_RESOURCE= SR_REP_RESOURCE= TC_REP_RESOURCE= GMD_REP_RESOURCE= GMVG_REP_RESOURCE= OEM_VOLUME_GROUP= OEM_FILESYSTEM= APPLICATIONS="ha_app" FILESYSTEM="ALL" FORCED_VARYON="false" FSCHECK_TOOL="fsck" FS_BEFORE_IPADDR="false" RECOVERY_METHOD="sequential" SERVICE_LABEL="ser" SSA_DISK_FENCING="false" VG_AUTO_IMPORT="false" VOLUME_GROUP="datavg"
:rg_move[+199] PRINCIPAL_ACTION=RELEASE ASSOCIATE_ACTION=MOUNT AUXILLIARY_ACTION=NONE VG_RR_ACTION=RELEASE SIBLING_NODES= SIBLING_NONOWNER_NODES= SIBLING_ACQUIRING_NODES= SIBLING_RELEASING_NODES= FOLLOWER_ACTION=NONE NFS_HOST=ha_db02 DISK= CONCURRENT_VOLUME_GROUP= EXPORT_FILESYSTEM= EXPORT_FILESYSTEM_V4= STABLE_STORAGE_PATH= STABLE_STORAGE_COOKIE= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= COMMUNICATION_LINKS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= INACTIVE_TAKEOVER= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= PPRC_REP_RESOURCE= ERCMF_REP_RESOURCE= SVCPPRC_REP_RESOURCE= SR_REP_RESOURCE= TC_REP_RESOURCE= GMD_REP_RESOURCE= GMVG_REP_RESOURCE= OEM_VOLUME_GROUP= OEM_FILESYSTEM= APPLICATIONS=ha_app FILESYSTEM=ALL FORCED_VARYON=false FSCHECK_TOOL=fsck FS_BEFORE_IPADDR=false RECOVERY_METHOD=sequential SERVICE_LABEL=ser SSA_DISK_FENCING=false VG_AUTO_IMPORT=false VOLUME_GROUP=datavg
:rg_move[+200] set +a
:rg_move[+201] export GROUPNAME=ha_oracle_rg
+ha_oracle_rg:rg_move[+201] [[ RELEASE = ]]
+ha_oracle_rg:rg_move[+225] [ RELEASE = RELEASE ]
+ha_oracle_rg:rg_move[+227] [ RELEASE = RELEASE ]
+ha_oracle_rg:rg_move[+229] clcallev node_down_local
Mar 24 19:21:58 EVENT START: node_down_local
+ha_oracle_rg:node_down_local[154] [[ high == high ]]
+ha_oracle_rg:node_down_local[154] version=1.2.1.94
+ha_oracle_rg:node_down_local[156] NOT_RELEASE=''
+ha_oracle_rg:node_down_local[157] STATUS=0
+ha_oracle_rg:node_down_local[157] typeset -i STATUS
+ha_oracle_rg:node_down_local[159] EMULATE=REAL
+ha_oracle_rg:node_down_local[161] (( 0 !=0 ))
+ha_oracle_rg:node_down_local[169] set +u
+ha_oracle_rg:node_down_local[170] RESOURCES_CLEANUP=''
+ha_oracle_rg:node_down_local[171] set -u
+ha_oracle_rg:node_down_local[174] set +u
+ha_oracle_rg:node_down_local[175] OEM_FILESYSTEM=''
+ha_oracle_rg:node_down_local[176] OEM_VOLUME_GROUP=''
+ha_oracle_rg:node_down_local[179] eval echo '${RESGRP_ha_oracle_rg_ha_db01}'
+ha_oracle_rg:node_down_local[1] echo ONLINE
+ha_oracle_rg:node_down_local[179] read group_state
+ha_oracle_rg:node_down_local[183] [[ '' != CLEANUP ]]
+ha_oracle_rg:node_down_local[190] eval 'echo ${RESGRP_ha_oracle_rg_ha_db01}'
+ha_oracle_rg:node_down_local[1] echo ONLINE
+ha_oracle_rg:node_down_local[190] read state
+ha_oracle_rg:node_down_local[191] [[ ONLINE != TMP_ERROR ]]
+ha_oracle_rg:node_down_local[192] [[ ONLINE != ONLINE ]]
+ha_oracle_rg:node_down_local[203] set_resource_status RELEASING
+ha_oracle_rg:node_down_local[4] set +u
+ha_oracle_rg:node_down_local[5] NOT_DOIT=''
+ha_oracle_rg:node_down_local[6] set -u
+ha_oracle_rg:node_down_local[8] [[ '' == CLEANUP ]]
+ha_oracle_rg:node_down_local[12] [[ '' != TRUE ]]
+ha_oracle_rg:node_down_local[14] [[ REAL == EMUL ]]
+ha_oracle_rg:node_down_local[19] clchdaemons -d clstrmgr_scripts -t resource_locator -n ha_db01 -o ha_oracle_rg -v RELEASING
+ha_oracle_rg:node_down_local[28] [[ RELEASING == RELEASING ]]
+ha_oracle_rg:node_down_local[30] [[ NONE == RELEASE_SECONDARY ]]
+ha_oracle_rg:node_down_local[31] [[ NONE == SECONDARY_BECOMES_PRIMARY ]]
+ha_oracle_rg:node_down_local[35] cl_RMupdate releasing ha_oracle_rg node_down_local
2013-03-24T19:21:58.694769
2013-03-24T19:21:58.707065
Reference string: Sun.Mar.24.19:21:58.BEIST.2013.node_down_local.ha_oracle_rg.ref
+ha_oracle_rg:node_down_local[213] [[ -z ONLINE ]]
+ha_oracle_rg:node_down_local[220] set -u
+ha_oracle_rg:node_down_local[225] [[ -n ha_app ]]
+ha_oracle_rg:node_down_local[232] TMPLIST=''
+ha_oracle_rg:node_down_local[233] [[ -n ha_app ]]
+ha_oracle_rg:node_down_local[234] print ha_app
+ha_oracle_rg:node_down_local[234] read first_one APPLICATIONS
+ha_oracle_rg:node_down_local[235] TMPLIST=ha_app
+ha_oracle_rg:node_down_local[233] [[ -n '' ]]
+ha_oracle_rg:node_down_local[238] APPLICATIONS=ha_app
+ha_oracle_rg:node_down_local[241] [[ REAL == EMUL ]]
+ha_oracle_rg:node_down_local[246] [[ '' != TRUE ]]
+ha_oracle_rg:node_down_local[247] clcallev stop_server ha_app
Mar 24 19:21:58 EVENT START: stop_server ha_app
+ha_oracle_rg:stop_server[+48] [[ high = high ]]
+ha_oracle_rg:stop_server[+48] version=1.4.1.13
+ha_oracle_rg:stop_server[+49] +ha_oracle_rg:stop_server[+49] cl_get_path
HA_DIR=es
+ha_oracle_rg:stop_server[+51] STATUS=0
+ha_oracle_rg:stop_server[+55] [ ! -n ]
+ha_oracle_rg:stop_server[+57] EMULATE=REAL
+ha_oracle_rg:stop_server[+60] PROC_RES=false
+ha_oracle_rg:stop_server[+64] [[ 0 != 0 ]]
+ha_oracle_rg:stop_server[+68] typeset WPARNAME WPARDIR EXEC
+ha_oracle_rg:stop_server[+69] WPARDIR=
+ha_oracle_rg:stop_server[+70] EXEC=
+ha_oracle_rg:stop_server[+72] +ha_oracle_rg:stop_server[+72] clwparname ha_oracle_rg
+ha_oracle_rg:clwparname[35] [[ high == high ]]
+ha_oracle_rg:clwparname[35] version=1.3
+ha_oracle_rg:clwparname[37] . /usr/es/sbin/cluster/wpar/wpar_utils
+ha_oracle_rg:clwparname[+20] ERRNO=0
+ha_oracle_rg:clwparname[+22] [[ high == high ]]
+ha_oracle_rg:clwparname[+22] set -x
+ha_oracle_rg:clwparname[+23] [[ high == high ]]
+ha_oracle_rg:clwparname[+23] version=1.10
+ha_oracle_rg:clwparname[+25] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+ha_oracle_rg:clwparname[+20] [[ high == high ]]
+ha_oracle_rg:clwparname[+20] set -x
+ha_oracle_rg:clwparname[+21] [[ high == high ]]
+ha_oracle_rg:clwparname[+21] version=1.4
+ha_oracle_rg:clwparname[+23] PATH=/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/events/emulate:/usr/es/sbin/cluster/events/emulate/driver:/usr/es/sbin/cluster/events/emulate/utils:/usr/es/sbin/cluster/tguides/bin:/usr/es/sbin/cluster/tguides/classes:/usr/es/sbin/cluster/tguides/images:/usr/es/sbin/cluster/tguides/scripts:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+ha_oracle_rg:clwparname[+24] export PATH
+ha_oracle_rg:clwparname[+26] typeset usageErr invalArgErr internalErr
+ha_oracle_rg:clwparname[+28] usageErr=10
+ha_oracle_rg:clwparname[+29] invalArgErr=11
+ha_oracle_rg:clwparname[+30] internalErr=12
+ha_oracle_rg:clwparname[+39] rgName=ha_oracle_rg
+ha_oracle_rg:clwparname[+42] uname
+ha_oracle_rg:clwparname[+42] OSNAME=AIX
+ha_oracle_rg:clwparname[+51] [[ AIX == *AIX* ]]
+ha_oracle_rg:clwparname[+54] lslpp -l bos.wpars
+ha_oracle_rg:clwparname[+54] 1> /dev/null 2>& 1
+ha_oracle_rg:clwparname[+56] loadWparName ha_oracle_rg
+ha_oracle_rg:clwparname[loadWparName+5] usage='Usage: loadWparName <resource group name>'
+ha_oracle_rg:clwparname[loadWparName+5] typeset -r usage
+ha_oracle_rg:clwparname[loadWparName+6] typeset rgName wparName wparDir rc
+ha_oracle_rg:clwparname[loadWparName+8] [[ 1 < 1 ]]
+ha_oracle_rg:clwparname[loadWparName+13] rgName=ha_oracle_rg
+ha_oracle_rg:clwparname[loadWparName+14] wparDir=/var/hacmp/adm/wpar
+ha_oracle_rg:clwparname[loadWparName+15] [[ ! -d /var/hacmp/adm/wpar ]]
+ha_oracle_rg:clwparname[loadWparName+22] [[ -f /var/hacmp/adm/wpar/ha_oracle_rg ]]
+ha_oracle_rg:clwparname[loadWparName+23] cat /var/hacmp/adm/wpar/ha_oracle_rg
+ha_oracle_rg:clwparname[loadWparName+23] wparName=''
+ha_oracle_rg:clwparname[loadWparName+24] [[ -n '' ]]
+ha_oracle_rg:clwparname[loadWparName+36] return 0
+ha_oracle_rg:clwparname[+56] wparName=''
+ha_oracle_rg:clwparname[+57] rc=0
+ha_oracle_rg:clwparname[+58] (( 0 != 0 ))
+ha_oracle_rg:clwparname[+64] printf %s
+ha_oracle_rg:clwparname[+65] exit 0
WPARNAME=
+ha_oracle_rg:stop_server[+74] set -u
+ha_oracle_rg:stop_server[+77] ALLSERVERS=All_servers
+ha_oracle_rg:stop_server[+78] [ REAL = EMUL ]
+ha_oracle_rg:stop_server[+83] cl_RMupdate resource_releasing All_servers stop_server
2013-03-24T19:21:58.906192
2013-03-24T19:21:58.918509
Reference string: Sun.Mar.24.19:21:58.BEIST.2013.stop_server.All_servers.ha_oracle_rg.ref
+ha_oracle_rg:stop_server[+88] [[ -n ]]
+ha_oracle_rg:stop_server[+107] +ha_oracle_rg:stop_server[+107] cllsserv -cn ha_app
+ha_oracle_rg:stop_server[+107] cut -d: -f3
STOP=/usr/hascript/oracle_stop
+ha_oracle_rg:stop_server[+108] +ha_oracle_rg:stop_server[+108] echo /usr/hascript/oracle_stop
+ha_oracle_rg:stop_server[+108] cut -d -f1
STOP_SCRIPT=/usr/hascript/oracle_stop
+ha_oracle_rg:stop_server[+110] PATTERN=ha_db01 ha_app
+ha_oracle_rg:stop_server[+110] [[ -n ]]
+ha_oracle_rg:stop_server[+110] [[ -z ]]
+ha_oracle_rg:stop_server[+110] [[ -x /usr/hascript/oracle_stop ]]
+ha_oracle_rg:stop_server[+120] [ REAL = EMUL ]
+ha_oracle_rg:stop_server[+125] /usr/hascript/oracle_stop
+ha_oracle_rg:stop_server[+125] ODMDIR=/etc/objrepos
db01:The ORACLE Server is stopping,Please Waiting.
SQL*Plus: Release 11.2.0.1.0 Production on Sun Mar 24 19:21:59 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
SQL> Connected.
SQL> Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
LSNRCTL for IBM/AIX RISC System/6000: Version 11.2.0.1.0 - Production on 24-MAR-2013 19:22:16
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ser)(PORT=1521)))
The command completed successfully
db01:The ORACLE Server is stoped.
+ha_oracle_rg:stop_server[+127] [ 0 -ne 0 ]
+ha_oracle_rg:stop_server[+155] ALLNOERRSERV=All_nonerror_servers
+ha_oracle_rg:stop_server[+156] [ REAL = EMUL ]
+ha_oracle_rg:stop_server[+161] cl_RMupdate resource_down All_nonerror_servers stop_server
2013-03-24T19:22:20.350049
2013-03-24T19:22:20.362503
Reference string: Sun.Mar.24.19:22:20.BEIST.2013.stop_server.All_nonerror_servers.ha_oracle_rg.ref
+ha_oracle_rg:stop_server[+164] exit 0
Mar 24 19:22:20 EVENT COMPLETED: stop_server ha_app 0
+ha_oracle_rg:node_down_local[258] server_release_lpar_resources ha_app
+ha_oracle_rg:server_release_lpar_resources[721] [[ high == high ]]
+ha_oracle_rg:server_release_lpar_resources[721] version=1.14.1.15
+ha_oracle_rg:server_release_lpar_resources[723] typeset HOSTNAME
+ha_oracle_rg:server_release_lpar_resources[724] typeset MANAGED_SYSTEM
+ha_oracle_rg:server_release_lpar_resources[725] typeset HMC_IP
+ha_oracle_rg:server_release_lpar_resources[726] added_apps=''
+ha_oracle_rg:server_release_lpar_resources[726] typeset added_apps
+ha_oracle_rg:server_release_lpar_resources[727] APPLICATIONS=''
+ha_oracle_rg:server_release_lpar_resources[727] typeset APPLICATIONS
+ha_oracle_rg:server_release_lpar_resources[728] mem_release_type=''
+ha_oracle_rg:server_release_lpar_resources[728] typeset mem_release_type
+ha_oracle_rg:server_release_lpar_resources[730] mem_resource=0
+ha_oracle_rg:server_release_lpar_resources[730] typeset mem_resource
+ha_oracle_rg:server_release_lpar_resources[731] cpu_resource=0
+ha_oracle_rg:server_release_lpar_resources[731] typeset cpu_resource
+ha_oracle_rg:server_release_lpar_resources[732] cuod_mem_resource=0
+ha_oracle_rg:server_release_lpar_resources[732] typeset cuod_mem_resource
+ha_oracle_rg:server_release_lpar_resources[733] cuod_cpu_resource=0
+ha_oracle_rg:server_release_lpar_resources[733] typeset cuod_cpu_resource
+ha_oracle_rg:server_release_lpar_resources[735] display_event_summary=false
+ha_oracle_rg:server_release_lpar_resources[735] typeset display_event_summary
+ha_oracle_rg:server_release_lpar_resources[737] lmb_size=0
+ha_oracle_rg:server_release_lpar_resources[737] typeset lmb_size
+ha_oracle_rg:server_release_lpar_resources[739] typeset -i check_cuod
+ha_oracle_rg:server_release_lpar_resources[740] RC=0
+ha_oracle_rg:server_release_lpar_resources[740] typeset -i RC
+ha_oracle_rg:server_release_lpar_resources[744] : Look for any added application servers, beyond those running at the moment
+ha_oracle_rg:server_release_lpar_resources[746] getopts :g: opt
+ha_oracle_rg:server_release_lpar_resources[754] shift 0
+ha_oracle_rg:server_release_lpar_resources[756] APPLICATIONS=ha_app
+ha_oracle_rg:server_release_lpar_resources[759] : Set up values we are going to need to talk to the HMC, if they have not
+ha_oracle_rg:server_release_lpar_resources[760] : been set up before.
+ha_oracle_rg:server_release_lpar_resources[762] [[ -z '' ]]
+ha_oracle_rg:server_release_lpar_resources[763] hostname
+ha_oracle_rg:server_release_lpar_resources[763] HOSTNAME=db01
+ha_oracle_rg:server_release_lpar_resources[766] [[ -z ha_db01 ]]
+ha_oracle_rg:server_release_lpar_resources[770] [[ -z '' ]]
+ha_oracle_rg:server_release_lpar_resources[771] clodmget -q 'name = ha_db01 and object = MANAGED_SYSTEM' -f value -n HACMPnode
+ha_oracle_rg:server_release_lpar_resources[771] MANAGED_SYSTEM=''
+ha_oracle_rg:server_release_lpar_resources[774] [[ -z '' ]]
+ha_oracle_rg:server_release_lpar_resources[775] clodmget -q name='ha_db01 and object=HMC_IP' -f value -n HACMPnode
+ha_oracle_rg:server_release_lpar_resources[775] HMC_IP=''
+ha_oracle_rg:server_release_lpar_resources[778] [[ -z '' ]]
+ha_oracle_rg:server_release_lpar_resources[780] : Node is not defined as an LPAR node if there is no HMC to talk to
+ha_oracle_rg:server_release_lpar_resources[782] exit 0
+ha_oracle_rg:node_down_local[259] : exit status of server_release_lpar_resources ha_app is: 0
+ha_oracle_rg:node_down_local[265] [[ -n '' ]]
+ha_oracle_rg:node_down_local[284] [[ -n '' ]]
+ha_oracle_rg:node_down_local[303] [[ -n '' ]]
+ha_oracle_rg:node_down_local[325] [[ -n '' ]]
+ha_oracle_rg:node_down_local[344] CROSSMOUNT=0
+ha_oracle_rg:node_down_local[344] typeset -i CROSSMOUNT
+ha_oracle_rg:node_down_local[345] export CROSSMOUNT
+ha_oracle_rg:node_down_local[347] [[ -n '' ]]
+ha_oracle_rg:node_down_local[387] (( 0 == 0 ))
+ha_oracle_rg:node_down_local[393] grep 'name ='
+ha_oracle_rg:node_down_local[393] sort
+ha_oracle_rg:node_down_local[393] uniq
+ha_oracle_rg:node_down_local[393] odmget HACMPnode
+ha_oracle_rg:node_down_local[393] wc -l
+ha_oracle_rg:node_down_local[393] (( 2 == 2 ))
+ha_oracle_rg:node_down_local[395] grep 'group ='
+ha_oracle_rg:node_down_local[395] cut -f2 '-d"'
+ha_oracle_rg:node_down_local[395] odmget HACMPgroup
+ha_oracle_rg:node_down_local[395] RESOURCE_GROUPS=ha_oracle_rg
+ha_oracle_rg:node_down_local[400] grep 'value ='
+ha_oracle_rg:node_down_local[400] cut -f2 '-d"'
+ha_oracle_rg:node_down_local[399] odmget -q group='ha_oracle_rg AND name=EXPORT_FILESYSTEM' HACMPresource
+ha_oracle_rg:node_down_local[399] EXPORTLIST=''
+ha_oracle_rg:node_down_local[400] [[ -n '' ]]
+ha_oracle_rg:node_down_local[423] [[ false == true ]]
+ha_oracle_rg:node_down_local[432] [[ -n '' ]]
+ha_oracle_rg:node_down_local[443] [[ '' != TRUE ]]
+ha_oracle_rg:node_down_local[444] clcallev release_vg_fs ALL datavg '' ''
Mar 24 19:22:20 EVENT START: release_vg_fs ALL datavg
+ha_oracle_rg:release_vg_fs[+64] [[ high = high ]]
+ha_oracle_rg:release_vg_fs[+64] version=1.4.1.55
+ha_oracle_rg:release_vg_fs[+66] STATUS=0
+ha_oracle_rg:release_vg_fs[+66] typeset -i DEF_VARYON_ACTION=0
+ha_oracle_rg:release_vg_fs[+70] [[ RELEASE != RELEASE ]]
+ha_oracle_rg:release_vg_fs[+76] FILE_SYSTEMS=ALL
+ha_oracle_rg:release_vg_fs[+77] VOLUME_GROUPS=datavg
+ha_oracle_rg:release_vg_fs[+78] OEM_FILE_SYSTEMS=
+ha_oracle_rg:release_vg_fs[+79] OEM_VOLUME_GROUPS=
+ha_oracle_rg:release_vg_fs[+80] VG_MOD=false
+ha_oracle_rg:release_vg_fs[+81] SELECTIVE_FAILOVER=false
+ha_oracle_rg:release_vg_fs[+81] typeset -i DEF_VARYOFF_ACTION=0
+ha_oracle_rg:release_vg_fs[+89] [[ ALL = ALL ]]
+ha_oracle_rg:release_vg_fs[+91] FILE_SYSTEMS=
+ha_oracle_rg:release_vg_fs[+91] [[ -z datavg ]]
+ha_oracle_rg:release_vg_fs[+91] [[ -n datavg ]]
+ha_oracle_rg:release_vg_fs[+103] +ha_oracle_rg:release_vg_fs[+103] rdsort datavg
+ha_oracle_rg:release_vg_fs[rdsort+4] echo datavg
+ha_oracle_rg:release_vg_fs[rdsort+4] sed -e s/\ /\
/g
+ha_oracle_rg:release_vg_fs[rdsort+5] sort -ru
VOLUME_GROUPS=datavg
+ha_oracle_rg:release_vg_fs[+103] [[ true = true ]]
+ha_oracle_rg:release_vg_fs[+103] [[ ONLINE != ONLINE ]]
+ha_oracle_rg:release_vg_fs[+113] date
Sun Mar 24 19:22:20 BEIST 2013
+ha_oracle_rg:release_vg_fs[+115] lsvg -L -o
+ha_oracle_rg:release_vg_fs[+115] grep -x datavg
datavg
+ha_oracle_rg:release_vg_fs[+118] +ha_oracle_rg:release_vg_fs[+118] odmget -q name = datavg CuDep
+ha_oracle_rg:release_vg_fs[+118] awk $1 == "dependency" { gsub(/"/, "", $3); print $3 }
OPEN_LVs=oradatalv
loglv00
+ha_oracle_rg:release_vg_fs[+122] +ha_oracle_rg:release_vg_fs[+122] odmget -q name = oradatalv and attribute = label CuAt
+ha_oracle_rg:release_vg_fs[+122] awk $1 == "value" && $3 ~ /^\"\//{ gsub(/"/,"",$3); print $3 }
FS=/oradata
+ha_oracle_rg:release_vg_fs[+122] [[ -n /oradata ]]
+ha_oracle_rg:release_vg_fs[+124] FILE_SYSTEMS= /oradata
+ha_oracle_rg:release_vg_fs[+122] +ha_oracle_rg:release_vg_fs[+122] odmget -q name = loglv00 and attribute = label CuAt
+ha_oracle_rg:release_vg_fs[+122] awk $1 == "value" && $3 ~ /^\"\//{ gsub(/"/,"",$3); print $3 }
FS=
+ha_oracle_rg:release_vg_fs[+122] [[ -n ]]
+ha_oracle_rg:release_vg_fs[+130] date
Sun Mar 24 19:22:20 BEIST 2013
+ha_oracle_rg:release_vg_fs[+130] [[ false = true ]]
+ha_oracle_rg:release_vg_fs[+142] [[ -n ]]
+ha_oracle_rg:release_vg_fs[+177] +ha_oracle_rg:release_vg_fs[+177] cl_fs2disk -v /oradata
vg=datavg
+ha_oracle_rg:release_vg_fs[+177] [[ = reconfig* ]]
+ha_oracle_rg:release_vg_fs[+195] VOLUME_GROUPS=datavg datavg
+ha_oracle_rg:release_vg_fs[+204] [[ -n /oradata ]]
+ha_oracle_rg:release_vg_fs[+207] +ha_oracle_rg:release_vg_fs[+207] rdsort /oradata
+ha_oracle_rg:release_vg_fs[rdsort+4] echo /oradata
+ha_oracle_rg:release_vg_fs[rdsort+4] sed -e s/\ /\
/g
+ha_oracle_rg:release_vg_fs[rdsort+5] sort -ru
FILE_SYSTEMS=/oradata
+ha_oracle_rg:release_vg_fs[+210] cl_deactivate_fs /oradata
+ha_oracle_rg:cl_deactivate_fs[369] [[ high == high ]]
+ha_oracle_rg:cl_deactivate_fs[369] version=1.1.4.37
+ha_oracle_rg:cl_deactivate_fs[371] STATUS=0
+ha_oracle_rg:cl_deactivate_fs[372] SLEEP=2
+ha_oracle_rg:cl_deactivate_fs[372] typeset -i SLEEP
+ha_oracle_rg:cl_deactivate_fs[373] LIMIT=60
+ha_oracle_rg:cl_deactivate_fs[373] typeset -i LIMIT
+ha_oracle_rg:cl_deactivate_fs[374] export SLEEP
+ha_oracle_rg:cl_deactivate_fs[375] export LIMIT
+ha_oracle_rg:cl_deactivate_fs[376] TMP_FILENAME=_deactivate_fs.tmp
+ha_oracle_rg:cl_deactivate_fs[378] [[ 1 != 0 ]]
+ha_oracle_rg:cl_deactivate_fs[378] [[ /oradata == -c ]]
+ha_oracle_rg:cl_deactivate_fs[383] OEM_CALL=false
+ha_oracle_rg:cl_deactivate_fs[387] : Check here to see if the forced unmount option can be used
+ha_oracle_rg:cl_deactivate_fs[389] FORCE_OK=''
+ha_oracle_rg:cl_deactivate_fs[390] export FORCE_OK
+ha_oracle_rg:cl_deactivate_fs[392] : Each of the V, R, M and F fields are padded to fixed length,
+ha_oracle_rg:cl_deactivate_fs[393] : to allow reliable comparisons. E.g., maximum VRMF is
+ha_oracle_rg:cl_deactivate_fs[394] : 99.99.999.999
+ha_oracle_rg:cl_deactivate_fs[396] typeset -i V R M F
+ha_oracle_rg:cl_deactivate_fs[397] typeset -Z2 V
+ha_oracle_rg:cl_deactivate_fs[398] typeset -Z2 R
+ha_oracle_rg:cl_deactivate_fs[399] typeset -Z3 M
+ha_oracle_rg:cl_deactivate_fs[400] typeset -Z3 F
+ha_oracle_rg:cl_deactivate_fs[401] jfs2_lvl=0601002000
+ha_oracle_rg:cl_deactivate_fs[401] typeset -i jfs2_lvl
+ha_oracle_rg:cl_deactivate_fs[402] VRMF=0
+ha_oracle_rg:cl_deactivate_fs[402] typeset -i VRMF
+ha_oracle_rg:cl_deactivate_fs[405] : Here try and figure out what level of JFS2 is installed
+ha_oracle_rg:cl_deactivate_fs[407] lslpp -lcqOr bos.rte.filesystem
+ha_oracle_rg:cl_deactivate_fs[407] cut -f3 -d:
+ha_oracle_rg:cl_deactivate_fs[407] read V R M F
+ha_oracle_rg:cl_deactivate_fs[407] IFS=.
+ha_oracle_rg:cl_deactivate_fs[408] VRMF=0601008015
+ha_oracle_rg:cl_deactivate_fs[410] (( 601008015 >= 601002000 ))
+ha_oracle_rg:cl_deactivate_fs[412] FORCE_OK=true
+ha_oracle_rg:cl_deactivate_fs[416] : if JOB_TYPE is set, and it does not equal to GROUP, then
+ha_oracle_rg:cl_deactivate_fs[417] : we are processing for process_resources
+ha_oracle_rg:cl_deactivate_fs[419] [[ 0 != 0 ]]
+ha_oracle_rg:cl_deactivate_fs[424] (( 1 == 0 ))
+ha_oracle_rg:cl_deactivate_fs[431] : At this point, we have an explicit list of filesystems to unmount
+ha_oracle_rg:cl_deactivate_fs[435] : Getting the resource group name from the environment
+ha_oracle_rg:cl_deactivate_fs[437] RES_GRP=ha_oracle_rg
+ha_oracle_rg:cl_deactivate_fs[438] TMP_FILENAME=ha_oracle_rg_deactivate_fs.tmp
+ha_oracle_rg:cl_deactivate_fs[441] : Remove the status file if already exists
+ha_oracle_rg:cl_deactivate_fs[443] rm -f /tmp/ha_oracle_rg_deactivate_fs.tmp
+ha_oracle_rg:cl_deactivate_fs[446] : if RECOVERY_METHOD is null get from ODM
+ha_oracle_rg:cl_deactivate_fs[448] [[ -z sequential ]]
+ha_oracle_rg:cl_deactivate_fs[454] print sequential
+ha_oracle_rg:cl_deactivate_fs[454] sed 's/^ //'
+ha_oracle_rg:cl_deactivate_fs[454] RECOVERY_METHOD=sequential
+ha_oracle_rg:cl_deactivate_fs[455] print sequential
+ha_oracle_rg:cl_deactivate_fs[455] sed 's/ $//'
+ha_oracle_rg:cl_deactivate_fs[455] RECOVERY_METHOD=sequential
+ha_oracle_rg:cl_deactivate_fs[456] [[ sequential != sequential ]]
+ha_oracle_rg:cl_deactivate_fs[463] set -u
+ha_oracle_rg:cl_deactivate_fs[466] : Are there any 'exports?'
+ha_oracle_rg:cl_deactivate_fs[468] [[ -n '' ]]
+ha_oracle_rg:cl_deactivate_fs[468] [[ -n '' ]]
+ha_oracle_rg:cl_deactivate_fs[477] : Reverse the order of the file systems list, to unmount in the opposite
+ha_oracle_rg:cl_deactivate_fs[478] : order from mounting. Important for nested mounts.
+ha_oracle_rg:cl_deactivate_fs[480] print /oradata
+ha_oracle_rg:cl_deactivate_fs[480] tr ' ' '\n'
+ha_oracle_rg:cl_deactivate_fs[480] /bin/sort -r
+ha_oracle_rg:cl_deactivate_fs[480] FILELIST=/oradata
+ha_oracle_rg:cl_deactivate_fs[483] : update resource manager - file systems being released
+ha_oracle_rg:cl_deactivate_fs[485] ALLFS=All_filesystems
+ha_oracle_rg:cl_deactivate_fs[486] cl_RMupdate resource_releasing All_filesystems cl_deactivate_fs
2013-03-24T19:22:20.967402
2013-03-24T19:22:20.979301
Reference string: Sun.Mar.24.19:22:20.BEIST.2013.cl_deactivate_fs.All_filesystems.ha_oracle_rg.ref
+ha_oracle_rg:cl_deactivate_fs[487] pid_list=''
+ha_oracle_rg:cl_deactivate_fs[489] [[ false == true ]]
+ha_oracle_rg:cl_deactivate_fs[495] [[ sequential == parallel ]]
+ha_oracle_rg:cl_deactivate_fs[500] fs_umount /oradata cl_deactivate_fs ha_oracle_rg_deactivate_fs.tmp
+ha_oracle_rg:cl_deactivate_fs(.150)[fs_umount+5] FS=/oradata
+ha_oracle_rg:cl_deactivate_fs(.150)[fs_umount+5] typeset FS
+ha_oracle_rg:cl_deactivate_fs(.150)[fs_umount+6] PROGNAME=cl_deactivate_fs
+ha_oracle_rg:cl_deactivate_fs(.150)[fs_umount+6] typeset PROGNAME
+ha_oracle_rg:cl_deactivate_fs(.150)[fs_umount+7] TMP_FILENAME=ha_oracle_rg_deactivate_fs.tmp
+ha_oracle_rg:cl_deactivate_fs(.150)[fs_umount+7] typeset TMP_FILENAME
+ha_oracle_rg:cl_deactivate_fs(.150)[fs_umount+8] clwparroot ha_oracle_rg
+ha_oracle_rg:clwparroot[35] [[ high == high ]]
+ha_oracle_rg:clwparroot[35] version=1.1
+ha_oracle_rg:clwparroot[37] . /usr/es/sbin/cluster/wpar/wpar_utils
+ha_oracle_rg:clwparroot[+20] ERRNO=0
+ha_oracle_rg:clwparroot[+22] [[ high == high ]]
+ha_oracle_rg:clwparroot[+22] set -x
+ha_oracle_rg:clwparroot[+23] [[ high == high ]]
+ha_oracle_rg:clwparroot[+23] version=1.10
+ha_oracle_rg:clwparroot[+25] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+ha_oracle_rg:clwparroot[+20] [[ high == high ]]
+ha_oracle_rg:clwparroot[+20] set -x
+ha_oracle_rg:clwparroot[+21] [[ high == high ]]
+ha_oracle_rg:clwparroot[+21] version=1.4
+ha_oracle_rg:clwparroot[+23] PATH=/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/events/emulate:/usr/es/sbin/cluster/events/emulate/driver:/usr/es/sbin/cluster/events/emulate/utils:/usr/es/sbin/cluster/tguides/bin:/usr/es/sbin/cluster/tguides/classes:/usr/es/sbin/cluster/tguides/images:/usr/es/sbin/cluster/tguides/scripts:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+ha_oracle_rg:clwparroot[+24] export PATH
+ha_oracle_rg:clwparroot[+26] typeset usageErr invalArgErr internalErr
+ha_oracle_rg:clwparroot[+28] usageErr=10
+ha_oracle_rg:clwparroot[+29] invalArgErr=11
+ha_oracle_rg:clwparroot[+30] internalErr=12
+ha_oracle_rg:clwparroot[+39] rgName=ha_oracle_rg
+ha_oracle_rg:clwparroot[+42] uname
+ha_oracle_rg:clwparroot[+42] OSNAME=AIX
+ha_oracle_rg:clwparroot[+44] [[ AIX == *AIX* ]]
+ha_oracle_rg:clwparroot[+45] lslpp -l bos.wpars
+ha_oracle_rg:clwparroot[+45] 1> /dev/null 2>& 1
+ha_oracle_rg:clwparroot[+47] loadWparName ha_oracle_rg
+ha_oracle_rg:clwparroot[loadWparName+5] usage='Usage: loadWparName <resource group name>'
+ha_oracle_rg:clwparroot[loadWparName+5] typeset -r usage
+ha_oracle_rg:clwparroot[loadWparName+6] typeset rgName wparName wparDir rc
+ha_oracle_rg:clwparroot[loadWparName+8] [[ 1 < 1 ]]
+ha_oracle_rg:clwparroot[loadWparName+13] rgName=ha_oracle_rg
+ha_oracle_rg:clwparroot[loadWparName+14] wparDir=/var/hacmp/adm/wpar
+ha_oracle_rg:clwparroot[loadWparName+15] [[ ! -d /var/hacmp/adm/wpar ]]
+ha_oracle_rg:clwparroot[loadWparName+22] [[ -f /var/hacmp/adm/wpar/ha_oracle_rg ]]
+ha_oracle_rg:clwparroot[loadWparName+23] cat /var/hacmp/adm/wpar/ha_oracle_rg
+ha_oracle_rg:clwparroot[loadWparName+23] wparName=''
+ha_oracle_rg:clwparroot[loadWparName+24] [[ -n '' ]]
+ha_oracle_rg:clwparroot[loadWparName+36] return 0
+ha_oracle_rg:clwparroot[+47] wparName=''
+ha_oracle_rg:clwparroot[+48] [[ -z '' ]]
+ha_oracle_rg:clwparroot[+48] exit 0
+ha_oracle_rg:cl_deactivate_fs(.220)[fs_umount+8] WPAR_ROOT=''
+ha_oracle_rg:cl_deactivate_fs(.220)[fs_umount+8] typeset WPAR_ROOT
+ha_oracle_rg:cl_deactivate_fs(.220)[fs_umount+9] STATUS=0
+ha_oracle_rg:cl_deactivate_fs(.220)[fs_umount+9] typeset STATUS
+ha_oracle_rg:cl_deactivate_fs(.220)[fs_umount+10] typeset lv
+ha_oracle_rg:cl_deactivate_fs(.220)[fs_umount+11] typeset lv_lsfs
+ha_oracle_rg:cl_deactivate_fs(.220)[fs_umount+14] : Get the logical volume associated with the filesystem
+ha_oracle_rg:cl_deactivate_fs(.230)[fs_umount+16] lsfs -c /oradata
+ha_oracle_rg:cl_deactivate_fs(.240)[fs_umount+16] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oradata:/dev/oradatalv:jfs2:::19922944:rw:no:no'
+ha_oracle_rg:cl_deactivate_fs(.240)[fs_umount+28] : Get the logical volume name and filesystem type
+ha_oracle_rg:cl_deactivate_fs(.250)[fs_umount+30] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oradata:/dev/oradatalv:jfs2:::19922944:rw:no:no'
+ha_oracle_rg:cl_deactivate_fs(.250)[fs_umount+30] tail -1
+ha_oracle_rg:cl_deactivate_fs(.250)[fs_umount+30] cut -d: -f2
+ha_oracle_rg:cl_deactivate_fs(.250)[fs_umount+30] lv=/dev/oradatalv
+ha_oracle_rg:cl_deactivate_fs(.260)[fs_umount+31] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oradata:/dev/oradatalv:jfs2:::19922944:rw:no:no'
+ha_oracle_rg:cl_deactivate_fs(.260)[fs_umount+31] tail -1
+ha_oracle_rg:cl_deactivate_fs(.270)[fs_umount+31] cut -d: -f3
+ha_oracle_rg:cl_deactivate_fs(.270)[fs_umount+31] fs_type=jfs2
+ha_oracle_rg:cl_deactivate_fs(.270)[fs_umount+34] : For WPARs, find the real file system name
+ha_oracle_rg:cl_deactivate_fs(.270)[fs_umount+36] [[ -n '' ]]
+ha_oracle_rg:cl_deactivate_fs(.270)[fs_umount+39] : Check to see if filesystem is mounted.
+ha_oracle_rg:cl_deactivate_fs(.270)[fs_umount+41] mount
+ha_oracle_rg:cl_deactivate_fs(.280)[fs_umount+41] grep -qw /dev/oradatalv
+ha_oracle_rg:cl_deactivate_fs(.280)[fs_umount+43] (( count=0))
+ha_oracle_rg:cl_deactivate_fs(.280)[fs_umount+43] (( count <= 60))
+ha_oracle_rg:cl_deactivate_fs(.280)[fs_umount+46] : Try to unmount the file system
+ha_oracle_rg:cl_deactivate_fs(.280)[fs_umount+47] date '+%h %d %H:%M:%S.000'
+ha_oracle_rg:cl_deactivate_fs(.290)[fs_umount+47] : Attempt 1 of 61 to unmount at Mar 24 19:22:21.000
+ha_oracle_rg:cl_deactivate_fs(.290)[fs_umount+49] umount /oradata
+ha_oracle_rg:cl_deactivate_fs(.470)[fs_umount+52] : Unmount of /oradata worked.
+ha_oracle_rg:cl_deactivate_fs(.470)[fs_umount+54] break
+ha_oracle_rg:cl_deactivate_fs(.470)[fs_umount+131] echo 0
+ha_oracle_rg:cl_deactivate_fs(.470)[fs_umount+131] 1>> /tmp/ha_oracle_rg_deactivate_fs.tmp
+ha_oracle_rg:cl_deactivate_fs(.480)[fs_umount+132] return 0
+ha_oracle_rg:cl_deactivate_fs[506] : wait to sync all the processes.
+ha_oracle_rg:cl_deactivate_fs[508] [[ -n '' ]]
+ha_oracle_rg:cl_deactivate_fs[514] : update resource manager - file systems released
+ha_oracle_rg:cl_deactivate_fs[516] ALLNOERROR=All_non_error_filesystems
+ha_oracle_rg:cl_deactivate_fs[517] cl_RMupdate resource_down All_non_error_filesystems cl_deactivate_fs
2013-03-24T19:22:21.358207
2013-03-24T19:22:21.371473
Reference string: Sun.Mar.24.19:22:21.BEIST.2013.cl_deactivate_fs.All_non_error_filesystems.ha_oracle_rg.ref
+ha_oracle_rg:cl_deactivate_fs[519] [[ -f /tmp/ha_oracle_rg_deactivate_fs.tmp ]]
+ha_oracle_rg:cl_deactivate_fs[521] grep -q 1 /tmp/ha_oracle_rg_deactivate_fs.tmp
+ha_oracle_rg:cl_deactivate_fs[526] rm -f /tmp/ha_oracle_rg_deactivate_fs.tmp
+ha_oracle_rg:cl_deactivate_fs[529] exit 0
+ha_oracle_rg:release_vg_fs[+222] [[ -n ]]
+ha_oracle_rg:release_vg_fs[+240] +ha_oracle_rg:release_vg_fs[+240] cl_rrmethods2call prevg_offline
+ha_oracle_rg:cl_rrmethods2call[+49] [[ high = high ]]
+ha_oracle_rg:cl_rrmethods2call[+49] version=1.17
+ha_oracle_rg:cl_rrmethods2call[+50] +ha_oracle_rg:cl_rrmethods2call[+50] cl_get_path
HA_DIR=es
+ha_oracle_rg:cl_rrmethods2call[+76] RRMETHODS=
+ha_oracle_rg:cl_rrmethods2call[+77] NEED_RR_ENV_VARS=no
+ha_oracle_rg:cl_rrmethods2call[+114] [[ no = yes ]]
+ha_oracle_rg:cl_rrmethods2call[+120] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+125] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+130] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+135] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+140] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+145] echo
+ha_oracle_rg:cl_rrmethods2call[+146] exit 0
METHODS=
+ha_oracle_rg:release_vg_fs[+241] SKIPVARYOFF=0
+ha_oracle_rg:release_vg_fs[+265] [ RELEASE = RELEASE ]
+ha_oracle_rg:release_vg_fs[+270] SKIPVARYOFF=0
+ha_oracle_rg:release_vg_fs[+272] (( 0 == 1 ))
+ha_oracle_rg:release_vg_fs[+288] [[ -n datavg datavg ]]
+ha_oracle_rg:release_vg_fs[+291] +ha_oracle_rg:release_vg_fs[+291] rdsort datavg datavg
+ha_oracle_rg:release_vg_fs[rdsort+4] echo datavg datavg
+ha_oracle_rg:release_vg_fs[rdsort+4] sed -e s/\ /\
/g
+ha_oracle_rg:release_vg_fs[rdsort+5] sort -ru
VOLUME_GROUPS=datavg
+ha_oracle_rg:release_vg_fs[+291] [[ 0 -eq 0 ]]
+ha_oracle_rg:release_vg_fs[+295] cl_deactivate_vgs datavg
+ha_oracle_rg:cl_deactivate_vgs[440] [[ high == high ]]
+ha_oracle_rg:cl_deactivate_vgs[440] version=1.1.11.2
+ha_oracle_rg:cl_deactivate_vgs[441] cl_get_path
+ha_oracle_rg:cl_deactivate_vgs[441] HA_DIR=es
+ha_oracle_rg:cl_deactivate_vgs[443] STATUS=0
+ha_oracle_rg:cl_deactivate_vgs[443] typeset -i STATUS
+ha_oracle_rg:cl_deactivate_vgs[444] TMP_VARYOFF_STATUS=/tmp/_deactivate_vgs.tmp
+ha_oracle_rg:cl_deactivate_vgs[445] sddsrv_off=FALSE
+ha_oracle_rg:cl_deactivate_vgs[446] ALLVGS=All_volume_groups
+ha_oracle_rg:cl_deactivate_vgs[447] OEM_CALL=false
+ha_oracle_rg:cl_deactivate_vgs[449] (( 1 != 0 ))
+ha_oracle_rg:cl_deactivate_vgs[449] [[ datavg == -c ]]
+ha_oracle_rg:cl_deactivate_vgs[458] EVENT_TYPE=not_set
+ha_oracle_rg:cl_deactivate_vgs[459] EVENT_TYPE=not_set
+ha_oracle_rg:cl_deactivate_vgs[462] : if JOB_TYPE is set, and it does not equal to GROUP, then
+ha_oracle_rg:cl_deactivate_vgs[463] : we are processing for process_resources
+ha_oracle_rg:cl_deactivate_vgs[465] [[ 0 != 0 ]]
+ha_oracle_rg:cl_deactivate_vgs[469] : Otherwise, check for valid call
+ha_oracle_rg:cl_deactivate_vgs[471] PROC_RES=false
+ha_oracle_rg:cl_deactivate_vgs[472] (( 1 == 0 ))
+ha_oracle_rg:cl_deactivate_vgs[480] : set -u will report an error if any variable used in the script is not set
+ha_oracle_rg:cl_deactivate_vgs[482] set -u
+ha_oracle_rg:cl_deactivate_vgs[485] : Remove the status file if it currently exists
+ha_oracle_rg:cl_deactivate_vgs[487] rm -f /tmp/_deactivate_vgs.tmp
+ha_oracle_rg:cl_deactivate_vgs[490] : Each of the V, R, M and F fields are padded to fixed length,
+ha_oracle_rg:cl_deactivate_vgs[491] : to allow reliable comparisons. E.g., maximum VRMF is
+ha_oracle_rg:cl_deactivate_vgs[492] : 99.99.999.999
+ha_oracle_rg:cl_deactivate_vgs[494] typeset -i V R M F
+ha_oracle_rg:cl_deactivate_vgs[495] typeset -Z2 V
+ha_oracle_rg:cl_deactivate_vgs[496] typeset -Z2 R
+ha_oracle_rg:cl_deactivate_vgs[497] typeset -Z3 M
+ha_oracle_rg:cl_deactivate_vgs[498] typeset -Z3 F
+ha_oracle_rg:cl_deactivate_vgs[499] VRMF=0
+ha_oracle_rg:cl_deactivate_vgs[499] typeset -i VRMF
+ha_oracle_rg:cl_deactivate_vgs[502] : If the sddsrv daemon is running - vpath dead path detection and
+ha_oracle_rg:cl_deactivate_vgs[503] : recovery - turn it off, since interactions with the fibre channel
+ha_oracle_rg:cl_deactivate_vgs[504] : device driver will, in the case where there actually is a dead path,
+ha_oracle_rg:cl_deactivate_vgs[505] : slow down every vpath operation.
+ha_oracle_rg:cl_deactivate_vgs[507] ls '/dev/vpath*'
+ha_oracle_rg:cl_deactivate_vgs[507] 1> /dev/null 2>& 1
+ha_oracle_rg:cl_deactivate_vgs[569] : Setup for the hats_adapter calls
+ha_oracle_rg:cl_deactivate_vgs[571] cldomain
+ha_oracle_rg:cl_deactivate_vgs[571] HA_DOMAIN_NAME=ha_oracle
+ha_oracle_rg:cl_deactivate_vgs[571] export HA_DOMAIN_NAME
+ha_oracle_rg:cl_deactivate_vgs[572] HB_SERVER_SOCKET=/var/ha/soc/topsvcs/server_socket
+ha_oracle_rg:cl_deactivate_vgs[572] export HB_SERVER_SOCKET
+ha_oracle_rg:cl_deactivate_vgs[575] : Special processing is required for 2 node clusters. determine the number
+ha_oracle_rg:cl_deactivate_vgs[576] : of nodes and AIX level
+ha_oracle_rg:cl_deactivate_vgs[578] TWO_NODE_CLUSTER=FALSE
+ha_oracle_rg:cl_deactivate_vgs[578] export TWO_NODE_CLUSTER
+ha_oracle_rg:cl_deactivate_vgs[579] FS_TYPES=''
+ha_oracle_rg:cl_deactivate_vgs[579] export FS_TYPES
+ha_oracle_rg:cl_deactivate_vgs[580] grep 'name ='
+ha_oracle_rg:cl_deactivate_vgs[580] sort
+ha_oracle_rg:cl_deactivate_vgs[580] uniq
+ha_oracle_rg:cl_deactivate_vgs[580] wc -l
+ha_oracle_rg:cl_deactivate_vgs[580] odmget HACMPnode
+ha_oracle_rg:cl_deactivate_vgs[580] (( 2 == 2 ))
+ha_oracle_rg:cl_deactivate_vgs[581] [[ -n '' ]]
+ha_oracle_rg:cl_deactivate_vgs[621] : Pick up a list of currently varyd on volume groups
+ha_oracle_rg:cl_deactivate_vgs[623] lsvg -L -o
+ha_oracle_rg:cl_deactivate_vgs[623] 2> /tmp/lsvg.err
+ha_oracle_rg:cl_deactivate_vgs[623] VG_ON_LIST=$'datavg\nrootvg'
+ha_oracle_rg:cl_deactivate_vgs[626] : if we are not called from process_resources, we have the old-style
+ha_oracle_rg:cl_deactivate_vgs[627] : environment and parameters
+ha_oracle_rg:cl_deactivate_vgs[629] [[ false == false ]]
+ha_oracle_rg:cl_deactivate_vgs[631] : Update the Resource Manager - let it know that were varying off these
+ha_oracle_rg:cl_deactivate_vgs[632] : volume groups
+ha_oracle_rg:cl_deactivate_vgs[634] cl_RMupdate resource_releasing All_volume_groups cl_deactivate_vgs
2013-03-24T19:22:21.639251
2013-03-24T19:22:21.651239
Reference string: Sun.Mar.24.19:22:21.BEIST.2013.cl_deactivate_vgs.All_volume_groups.ha_oracle_rg.ref
+ha_oracle_rg:cl_deactivate_vgs[637] : First process any mndhb for these volume groups
+ha_oracle_rg:cl_deactivate_vgs[639] vgs_process_mndhb datavg
+ha_oracle_rg:cl_deactivate_vgs[65] [[ high == high ]]
+ha_oracle_rg:cl_deactivate_vgs[65] set -x
+ha_oracle_rg:cl_deactivate_vgs[67] VG_LIST=datavg
+ha_oracle_rg:cl_deactivate_vgs[67] typeset VG_LIST
+ha_oracle_rg:cl_deactivate_vgs[68] typeset lv_list
+ha_oracle_rg:cl_deactivate_vgs[69] typeset lv_base
+ha_oracle_rg:cl_deactivate_vgs[71] STATUS=0
+ha_oracle_rg:cl_deactivate_vgs[71] typeset -i STATUS
+ha_oracle_rg:cl_deactivate_vgs[72] RC=0
+ha_oracle_rg:cl_deactivate_vgs[72] typeset -i RC
+ha_oracle_rg:cl_deactivate_vgs[73] rc_hats_adapter=0
+ha_oracle_rg:cl_deactivate_vgs[73] typeset -i rc_hats_adapter
+ha_oracle_rg:cl_deactivate_vgs[78] : If this vg contains lvs that are part of a mndhb network, tell
+ha_oracle_rg:cl_deactivate_vgs[79] : topsvcs to stop monitoring the network.
+ha_oracle_rg:cl_deactivate_vgs[80] : Note that we use clrsctinfo/cllsif because it will do the raw device
+ha_oracle_rg:cl_deactivate_vgs[81] : name mapping for us.
+ha_oracle_rg:cl_deactivate_vgs[83] grep :datavg:
+ha_oracle_rg:cl_deactivate_vgs[83] cut -f 7 -d:
+ha_oracle_rg:cl_deactivate_vgs[83] sort -u
+ha_oracle_rg:cl_deactivate_vgs[83] clrsctinfo -p cllsif -c
+ha_oracle_rg:cl_deactivate_vgs[83] lv_list=''
+ha_oracle_rg:cl_deactivate_vgs[109] : if there were any calls to hats_adapter give topsvcs a bit to catch up
+ha_oracle_rg:cl_deactivate_vgs[111] [[ -n '' ]]
+ha_oracle_rg:cl_deactivate_vgs[112] return 0
+ha_oracle_rg:cl_deactivate_vgs[641] PS4_LOOP=''
+ha_oracle_rg:cl_deactivate_vgs[641] typeset PS4_LOOP
+ha_oracle_rg:cl_deactivate_vgs[643] : Now, process the list of volume groups passed in
+ha_oracle_rg:cl_deactivate_vgs:datavg[647] PS4_LOOP=datavg
+ha_oracle_rg:cl_deactivate_vgs:datavg[649] : Find out if it is varied on
+ha_oracle_rg:cl_deactivate_vgs:datavg[651] [[ false == false ]]
+ha_oracle_rg:cl_deactivate_vgs:datavg[654] : Dealing with AIX LVM volume groups
+ha_oracle_rg:cl_deactivate_vgs:datavg[656] print datavg rootvg
+ha_oracle_rg:cl_deactivate_vgs:datavg[656] grep -qw datavg
+ha_oracle_rg:cl_deactivate_vgs:datavg[663] MODE=9999
+ha_oracle_rg:cl_deactivate_vgs:datavg[664] /usr/sbin/getlvodm -v datavg
+ha_oracle_rg:cl_deactivate_vgs:datavg[664] VGID=0057d5ec00004c000000013d962e6972
+ha_oracle_rg:cl_deactivate_vgs:datavg[665] lqueryvg -g 0057d5ec00004c000000013d962e6972 -X
+ha_oracle_rg:cl_deactivate_vgs:datavg[665] MODE=0
+ha_oracle_rg:cl_deactivate_vgs:datavg[666] RC=0
+ha_oracle_rg:cl_deactivate_vgs:datavg[667] (( 0 != 0 ))
+ha_oracle_rg:cl_deactivate_vgs:datavg[668] : exit status of lqueryvg -g 0057d5ec00004c000000013d962e6972 -X: 0
+ha_oracle_rg:cl_deactivate_vgs:datavg[671] : Yes, it is varyd on, so go vary it off
+ha_oracle_rg:cl_deactivate_vgs:datavg[673] vgs_varyoff datavg 0
+ha_oracle_rg:cl_deactivate_vgs:datavg[132] PS4_FUNC=vgs_varyoff
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[132] PS4_TIMER=true
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+132] ERRNO=0
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+132] typeset PS4_FUNC PS4_TIMER ERRNO
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+133] [[ high == high ]]
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+133] set -x
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+135] VG=datavg
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+135] typeset VG
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+136] MODE=0
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+136] typeset MODE
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+138] typeset OPEN_FSs
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+139] typeset OPEN_LVs
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+140] typeset TMP_VG_LIST
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+142] STATUS=0
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+142] typeset -i STATUS
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+143] RC=0
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+143] typeset -i RC
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+144] SELECTIVE_FAILOVER=false
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+144] typeset SELECTIVE_FAILOVER
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+145] typeset LV
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+146] rc_fuser=0
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+146] typeset -i rc_fuser
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+147] rc_varyonvg=0
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+147] typeset -i rc_varyonvg
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+148] rc_varyoffvg=0
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+148] typeset -i rc_varyoffvg
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+149] rc_hats_adapter=0
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+149] typeset -i rc_hats_adapter
+ha_oracle_rg:cl_deactivate_vgs(.440):datavg[vgs_varyoff+152] : Check to see if this is a DARE event, whilst we have open file systems
+ha_oracle_rg:cl_deactivate_vgs:datavg[688] unset PS4_LOOP
+ha_oracle_rg:cl_deactivate_vgs[778] : Wait to sync all the background instances of vgs_varyoff
+ha_oracle_rg:cl_deactivate_vgs[780] wait
+ha_oracle_rg:cl_deactivate_vgs(.450):datavg[vgs_varyoff+154] lsvg -l -L datavg
+ha_oracle_rg:cl_deactivate_vgs(.450):datavg[vgs_varyoff+154] LC_ALL=C
+ha_oracle_rg:cl_deactivate_vgs(.610):datavg[vgs_varyoff+154] TMP_VG_LIST=$'datavg:\nLV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT\noradatalv jfs2 38 38 1 closed/syncd /oradata\nloglv00 jfs2log 1 1 1 closed/syncd N/A'
+ha_oracle_rg:cl_deactivate_vgs(.610):datavg[vgs_varyoff+156] [[ not_set == reconfig* ]]
+ha_oracle_rg:cl_deactivate_vgs(.610):datavg[vgs_varyoff+174] : Get list of open logical volumes corresponding to file systems
+ha_oracle_rg:cl_deactivate_vgs(.620):datavg[vgs_varyoff+176] print $'datavg:\nLV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT\noradatalv jfs2 38 38 1 closed/syncd /oradata\nloglv00 jfs2log 1 1 1 closed/syncd N/A'
+ha_oracle_rg:cl_deactivate_vgs(.620):datavg[vgs_varyoff+176] awk '$2 ~ /jfs2?$/ && $6 ~ /open/ {print $1}'
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+176] OPEN_LVs=''
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+179] : If this is an rg_move on selective fallover, lsvg -l might not work
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+180] : so try looking up the LVs in the ODM if the VG is online
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+182] [[ -z '' ]]
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+183] [[ true == true ]]
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+184] [[ ONLINE != ONLINE ]]
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+200] : Attempt to kill off any processes using the logical volume, so that
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+201] : varyoff will hopefully work. Varyoff is guaranteed to fail if there
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+202] : are open connections to any logical volume.
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+204] [[ -n '' ]]
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+219] : For two-node clusters, special processing for the highly available NFS
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+220] : server function: tell NFS to dump the dup cache into the jfslog or
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+221] : jfs2log, if the level of AIX supports it to allow it to be picked up
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+222] : by the next node to get this volume group.
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+224] [[ FALSE == TRUE ]]
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+275] : Finally, vary off the volume group
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+277] [[ 32 == 0 ]]
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+302] cltime
2013-03-24T19:22:22.096017
+ha_oracle_rg:cl_deactivate_vgs(.630):datavg[vgs_varyoff+303] varyoffvg datavg
+ha_oracle_rg:cl_deactivate_vgs(1.170):datavg[vgs_varyoff+304] rc_varyoffvg=0
+ha_oracle_rg:cl_deactivate_vgs(1.170):datavg[vgs_varyoff+305] : rc_varyoffvg=0
+ha_oracle_rg:cl_deactivate_vgs(1.170):datavg[vgs_varyoff+306] cltime
2013-03-24T19:22:22.633120
+ha_oracle_rg:cl_deactivate_vgs(1.170):datavg[vgs_varyoff+310] : Check the result of the varyoff
+ha_oracle_rg:cl_deactivate_vgs(1.170):datavg[vgs_varyoff+312] (( 0 != 0 ))
+ha_oracle_rg:cl_deactivate_vgs(1.170):datavg[vgs_varyoff+388] : Append status to the status file. Append is used because there may be
+ha_oracle_rg:cl_deactivate_vgs(1.170):datavg[vgs_varyoff+389] : many instances of this subroutine appending status, as volume groups
+ha_oracle_rg:cl_deactivate_vgs(1.170):datavg[vgs_varyoff+390] : are processed in parallel.
+ha_oracle_rg:cl_deactivate_vgs(1.170):datavg[vgs_varyoff+392] echo datavg 0
+ha_oracle_rg:cl_deactivate_vgs(1.170):datavg[vgs_varyoff+392] 1>> /tmp/_deactivate_vgs.tmp
+ha_oracle_rg:cl_deactivate_vgs(1.170):datavg[vgs_varyoff+393] return 0
+ha_oracle_rg:cl_deactivate_vgs[783] : Collect any failure indications from backgrounded varyoff processing
+ha_oracle_rg:cl_deactivate_vgs[785] [[ -f /tmp/_deactivate_vgs.tmp ]]
+ha_oracle_rg:cl_deactivate_vgs[788] : Check to see if any failures were noted. A status of 1 indicates
+ha_oracle_rg:cl_deactivate_vgs[789] : that there was a problem with varyoff.
+ha_oracle_rg:cl_deactivate_vgs[791] cat /tmp/_deactivate_vgs.tmp
+ha_oracle_rg:cl_deactivate_vgs[791] read VGNAME VARYOFF_STATUS
+ha_oracle_rg:cl_deactivate_vgs[793] [[ 0 == 1 ]]
+ha_oracle_rg:cl_deactivate_vgs[791] read VGNAME VARYOFF_STATUS
+ha_oracle_rg:cl_deactivate_vgs[804] rm -f /tmp/_deactivate_vgs.tmp
+ha_oracle_rg:cl_deactivate_vgs[808] : Update Resource Manager - tell it that all the ones that were not reported
+ha_oracle_rg:cl_deactivate_vgs[809] : as failures, worked.
+ha_oracle_rg:cl_deactivate_vgs[811] ALLNOERRVGS=All_nonerror_volume_groups
+ha_oracle_rg:cl_deactivate_vgs[812] [[ false == false ]]
+ha_oracle_rg:cl_deactivate_vgs[813] cl_RMupdate resource_down All_nonerror_volume_groups cl_deactivate_vgs
2013-03-24T19:22:22.693258
2013-03-24T19:22:22.705206
Reference string: Sun.Mar.24.19:22:22.BEIST.2013.cl_deactivate_vgs.All_nonerror_volume_groups.ha_oracle_rg.ref
+ha_oracle_rg:cl_deactivate_vgs[820] [[ FALSE == TRUE ]]
+ha_oracle_rg:cl_deactivate_vgs[828] exit 0
+ha_oracle_rg:release_vg_fs[+338] [[ -n ]]
+ha_oracle_rg:release_vg_fs[+352] +ha_oracle_rg:release_vg_fs[+352] cl_rrmethods2call postvg_offline
+ha_oracle_rg:cl_rrmethods2call[+49] [[ high = high ]]
+ha_oracle_rg:cl_rrmethods2call[+49] version=1.17
+ha_oracle_rg:cl_rrmethods2call[+50] +ha_oracle_rg:cl_rrmethods2call[+50] cl_get_path
HA_DIR=es
+ha_oracle_rg:cl_rrmethods2call[+76] RRMETHODS=
+ha_oracle_rg:cl_rrmethods2call[+77] NEED_RR_ENV_VARS=no
+ha_oracle_rg:cl_rrmethods2call[+114] [[ no = yes ]]
+ha_oracle_rg:cl_rrmethods2call[+120] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+125] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+130] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+135] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+140] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+145] echo
+ha_oracle_rg:cl_rrmethods2call[+146] exit 0
METHODS=
+ha_oracle_rg:release_vg_fs[+365] exit 0
Mar 24 19:22:22 EVENT COMPLETED: release_vg_fs ALL datavg 0
+ha_oracle_rg:node_down_local[453] [[ -n '' ]]
+ha_oracle_rg:node_down_local[475] [[ R == RELEASE ]]
+ha_oracle_rg:node_down_local[491] [[ false != true ]]
+ha_oracle_rg:node_down_local[493] release_addr
+ha_oracle_rg:node_down_local[8] [[ -n '' ]]
+ha_oracle_rg:node_down_local[22] [[ -n ser ]]
+ha_oracle_rg:node_down_local[24] [[ '' != TRUE ]]
+ha_oracle_rg:node_down_local[25] clcallev release_service_addr ser
Mar 24 19:22:22 EVENT START: release_service_addr ser
+ha_oracle_rg:release_service_addr[+120] [[ high = high ]]
+ha_oracle_rg:release_service_addr[+120] version=1.40
+ha_oracle_rg:release_service_addr[+121] +ha_oracle_rg:release_service_addr[+121] cl_get_path
HA_DIR=es
+ha_oracle_rg:release_service_addr[+122] +ha_oracle_rg:release_service_addr[+122] cl_get_path -S
OP_SEP=~
+ha_oracle_rg:release_service_addr[+124] STATUS=0
+ha_oracle_rg:release_service_addr[+127] [ ! -n ]
+ha_oracle_rg:release_service_addr[+129] EMULATE=REAL
+ha_oracle_rg:release_service_addr[+132] PROC_RES=false
+ha_oracle_rg:release_service_addr[+136] [[ 0 != 0 ]]
+ha_oracle_rg:release_service_addr[+141] [ 1 -eq 0 ]
+ha_oracle_rg:release_service_addr[+146] export RESOURCE_GROUPS=ha_oracle_rg
+ha_oracle_rg:release_service_addr[+162] saveNSORDER=UNDEFINED
+ha_oracle_rg:release_service_addr[+163] NSORDER=local
+ha_oracle_rg:release_service_addr[+163] export NSORDER
+ha_oracle_rg:release_service_addr[+170] export GROUPNAME
+ha_oracle_rg:release_service_addr[+170] [[ false = true ]]
+ha_oracle_rg:release_service_addr[+180] SERVICELABELS=ser
+ha_oracle_rg:release_service_addr[+184] ALLSRVADDRS=All_service_addrs
+ha_oracle_rg:release_service_addr[+184] [[ REAL = EMUL ]]
+ha_oracle_rg:release_service_addr[+190] cl_RMupdate resource_releasing All_service_addrs release_service_addr
2013-03-24T19:22:22.936603
2013-03-24T19:22:22.948479
Reference string: Sun.Mar.24.19:22:22.BEIST.2013.release_service_addr.All_service_addrs.ha_oracle_rg.ref
+ha_oracle_rg:release_service_addr[+200] clgetif -a ser
+ha_oracle_rg:release_service_addr[+200] LC_ALL=C
en1
+ha_oracle_rg:release_service_addr[+201] return_code=0
+ha_oracle_rg:release_service_addr[+202] [ 0 -ne 0 ]
+ha_oracle_rg:release_service_addr[+229] +ha_oracle_rg:release_service_addr[+229] name_to_addr ser
textual_addr=192.168.4.13
+ha_oracle_rg:release_service_addr[+230] +ha_oracle_rg:release_service_addr[+230] clgetif -a 192.168.4.13
+ha_oracle_rg:release_service_addr[+230] LC_ALL=C
INTERFACE=en1
+ha_oracle_rg:release_service_addr[+231] [ en1 = ]
+ha_oracle_rg:release_service_addr[+258] +ha_oracle_rg:release_service_addr[+258] clgetif -n 192.168.4.13
+ha_oracle_rg:release_service_addr[+258] LC_ALL=C
NETMASK=255.255.255.0
+ha_oracle_rg:release_service_addr[+260] +ha_oracle_rg:release_service_addr[+260] cllsif -J ~
+ha_oracle_rg:release_service_addr[+260] grep -wF 192.168.4.13
+ha_oracle_rg:release_service_addr[+260] cut -d~ -f3
+ha_oracle_rg:release_service_addr[+260] sort -u
NETWORK=net_ether_01
+ha_oracle_rg:release_service_addr[+267] +ha_oracle_rg:release_service_addr[+267] cllsif -J ~ -Si ha_db01
+ha_oracle_rg:release_service_addr[+267] grep ~boot~
+ha_oracle_rg:release_service_addr[+267] cut -d~ -f3,7
+ha_oracle_rg:release_service_addr[+267] grep ^net_ether_01~
+ha_oracle_rg:release_service_addr[+267] cut -d~ -f2
+ha_oracle_rg:release_service_addr[+267] tail -1
BOOT=192.168.2.11
+ha_oracle_rg:release_service_addr[+269] [ -z 192.168.2.11 ]
+ha_oracle_rg:release_service_addr[+299] SNA_LAN_LINKS=
+ha_oracle_rg:release_service_addr[+308] SNA_CONNECTIONS=
+ha_oracle_rg:release_service_addr[+308] [[ -n ]]
+ha_oracle_rg:release_service_addr[+323] [ -n en1 ]
+ha_oracle_rg:release_service_addr[+325] [ REAL = EMUL ]
+ha_oracle_rg:release_service_addr[+330] +ha_oracle_rg:release_service_addr[+330] get_inet_family 192.168.4.13
+ha_oracle_rg:release_service_addr[get_inet_family+3] ip_label=192.168.4.13
+ha_oracle_rg:release_service_addr[get_inet_family+4] +ha_oracle_rg:release_service_addr[get_inet_family+4] cllsif -J ~ -Sn 192.168.4.13
+ha_oracle_rg:release_service_addr[get_inet_family+4] awk -F~ {print $15}
inet_family=AF_INET
+ha_oracle_rg:release_service_addr[get_inet_family+7] echo inet
+ha_oracle_rg:release_service_addr[get_inet_family+8] return
INET_FAMILY=inet
+ha_oracle_rg:release_service_addr[+330] [[ inet = inet6 ]]
+ha_oracle_rg:release_service_addr[+336] cl_swap_IP_address rotating release en1 192.168.2.11 192.168.4.13 255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[+1115] [[ high = high ]]
+ha_oracle_rg:cl_swap_IP_address[+1115] version=1.9.1.110
+ha_oracle_rg:cl_swap_IP_address[+1116] +ha_oracle_rg:cl_swap_IP_address[+1116] cl_get_path
HA_DIR=es
+ha_oracle_rg:cl_swap_IP_address[+1117] +ha_oracle_rg:cl_swap_IP_address[+1117] cl_get_path -S
OP_SEP=~
+ha_oracle_rg:cl_swap_IP_address[+1118] export LC_ALL=C
+ha_oracle_rg:cl_swap_IP_address[+1119] RESTORE_ROUTES=/usr/es/sbin/cluster/.restore_routes
+ha_oracle_rg:cl_swap_IP_address[+1123] cldomain
+ha_oracle_rg:cl_swap_IP_address[+1123] export HA_DOMAIN_NAME=ha_oracle
+ha_oracle_rg:cl_swap_IP_address[+1124] export HB_SERVER_SOCKET=/var/ha/soc/topsvcs/server_socket
+ha_oracle_rg:cl_swap_IP_address[+1125] BINDIR=/usr/sbin/rsct/bin
+ha_oracle_rg:cl_swap_IP_address[+1128] +ha_oracle_rg:cl_swap_IP_address[+1128] clmixver
MIXVER=11
+ha_oracle_rg:cl_swap_IP_address[+1129] MIXVERRC=0
+ha_oracle_rg:cl_swap_IP_address[+1131] cl_echo 33 Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating release en1 192.168.2.11 192.168.4.13 255.255.255.0 /usr/es/sbin/cluster/events/utils/cl_swap_IP_address rotating release en1 192.168.2.11 192.168.4.13 255.255.255.0
+ha_oracle_rg:cl_echo[+35] version=1.16
+ha_oracle_rg:cl_echo[+84] HACMP_OUT_FILE=/var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+89] set +u
+ha_oracle_rg:cl_echo[+89] [[ -n ]]
+ha_oracle_rg:cl_echo[+92] set -u
+ha_oracle_rg:cl_echo[+95] print -n -u2 Mar 24 2013 19:22:23
Mar 24 2013 19:22:23 +ha_oracle_rg:cl_echo[+96] MSG_ID=33
+ha_oracle_rg:cl_echo[+97] shift
+ha_oracle_rg:cl_echo[+98] dspmsg scripts.cat 33 Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating release en1 192.168.2.11 192.168.4.13 255.255.255.0 /usr/es/sbin/cluster/events/utils/cl_swap_IP_address rotating release en1 192.168.2.11 192.168.4.13 255.255.255.0
+ha_oracle_rg:cl_echo[+98] 1>& 2
Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating release en1 192.168.2.11 192.168.4.13 255.255.255.0+ha_oracle_rg:cl_echo[+101] clsynclog /var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+101] 1> /dev/null 2>& 1
+ha_oracle_rg:cl_echo[+105] return 0
+ha_oracle_rg:cl_swap_IP_address[+1132] date
Sun Mar 24 19:22:23 BEIST 2013
+ha_oracle_rg:cl_swap_IP_address[+1138] [[ __AIX__ == __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[+1138] [[ 6 -eq 6 ]]
+ha_oracle_rg:cl_swap_IP_address[+1140] non_alias 192.168.2.11
+ha_oracle_rg:cl_swap_IP_address[+1140] [[ 0 -eq 1 ]]
+ha_oracle_rg:cl_swap_IP_address[+1148] set_ipignoreredirects
Setting ipignoreredirects to 1
+ha_oracle_rg:cl_swap_IP_address[+1151] PROC_RES=false
+ha_oracle_rg:cl_swap_IP_address[+1155] [[ 0 != 0 ]]
+ha_oracle_rg:cl_swap_IP_address[+1159] set -u
+ha_oracle_rg:cl_swap_IP_address[+1161] ATM_IF_TYPE=
+ha_oracle_rg:cl_swap_IP_address[+1162] STANDBY_ROUTES=
+ha_oracle_rg:cl_swap_IP_address[+1163] NEED_ROUTE_PRESERVATION=0
+ha_oracle_rg:cl_swap_IP_address[+1170] netstat -in
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
en1 1500 link#2 0.9.6b.2e.1c.44 194803 0 101844 4 0
en1 1500 192.168.2 192.168.2.11 194803 0 101844 4 0
en1 1500 192.168.4 192.168.4.13 194803 0 101844 4 0
en2 1500 link#3 0.11.25.bf.a8.6b 8904 0 5795 3 0
en2 1500 192.168.3 192.168.3.11 8904 0 5795 3 0
en2 1500 192.168.4 192.168.4.11 8904 0 5795 3 0
lo0 16896 link#1 33406 0 33406 0 0
lo0 16896 127 127.0.0.1 33406 0 33406 0 0
lo0 16896 ::1%1 33406 0 33406 0 0
+ha_oracle_rg:cl_swap_IP_address[+1171] netstat -rnC
Routing tables
Destination Gateway Flags Wt Policy If Cost Config_Cost
Route tree for Protocol Family 2 (Internet):
127/8 127.0.0.1 U 1 - lo0 0 0
192.168.2.0 192.168.2.11 UHSb 1 - en1 0 0 =>
192.168.2/24 192.168.2.11 U 1 - en1 0 0
192.168.2.11 127.0.0.1 UGHS 1 - lo0 0 0
192.168.2.255 192.168.2.11 UHSb 1 - en1 0 0
192.168.3.0 192.168.3.11 UHSb 1 - en2 0 0 =>
192.168.3/24 192.168.3.11 U 1 - en2 0 0
192.168.3.11 127.0.0.1 UGHS 1 - lo0 0 0
192.168.3.255 192.168.3.11 UHSb 1 - en2 0 0
192.168.4.0 192.168.4.11 UHSb 1 WRR en2 0 0 =>
192.168.4.0 192.168.4.13 UHSb 1 -"- en1 0 0 =>
192.168.4/24 192.168.4.11 U 1 WRR en2 0 0 =>
192.168.4/24 192.168.4.13 U 1 -"- en1 0 0
192.168.4.11 127.0.0.1 UGHS 1 - lo0 0 0
192.168.4.13 127.0.0.1 UGHS 1 - lo0 0 0
192.168.4.255 192.168.4.11 UHSb 1 WRR en2 0 0 =>
192.168.4.255 192.168.4.13 UHSb 1 -"- en1 0 0
Route tree for Protocol Family 24 (Internet v6):
::1%1 ::1%1 UH 1 - lo0 0 0
+ha_oracle_rg:cl_swap_IP_address[+1172] CASC_OR_ROT=rotating
+ha_oracle_rg:cl_swap_IP_address[+1173] ACQ_OR_RLSE=release
+ha_oracle_rg:cl_swap_IP_address[+1174] IF=en1
+ha_oracle_rg:cl_swap_IP_address[+1175] ADDR=192.168.2.11
+ha_oracle_rg:cl_swap_IP_address[+1176] OLD_ADDR=192.168.4.13
+ha_oracle_rg:cl_swap_IP_address[+1177] NETMASK=255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[+1178] +ha_oracle_rg:cl_swap_IP_address[+1178] get_MTU en1
+ha_oracle_rg:cl_swap_IP_address[get_MTU+5] IF=en1
+ha_oracle_rg:cl_swap_IP_address[get_MTU+7] MTUSIZE=
+ha_oracle_rg:cl_swap_IP_address[get_MTU+7] [[ __AIX__ == __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[get_MTU+9] +ha_oracle_rg:cl_swap_IP_address[get_MTU+9] awk NR == 2 {print $2}
+ha_oracle_rg:cl_swap_IP_address[get_MTU+9] netstat -nI en1
MTUSIZE=1500
+ha_oracle_rg:cl_swap_IP_address[get_MTU+15] print 1500
MTUSIZE=1500
+ha_oracle_rg:cl_swap_IP_address[+1178] [[ __AIX__ == __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[+1178] [[ rotating = cascading ]]
+ha_oracle_rg:cl_swap_IP_address[+1197] +ha_oracle_rg:cl_swap_IP_address[+1197] check_ATM_interface en1
+ha_oracle_rg:cl_swap_IP_address[check_ATM_interface+5] if_name=en1
+ha_oracle_rg:cl_swap_IP_address[check_ATM_interface+12] awk /Device Type: ATM LAN Emulation/ {print "atmle_ent";exit}
+ha_oracle_rg:cl_swap_IP_address[check_ATM_interface+12] entstat -d en1
+ha_oracle_rg:cl_swap_IP_address[check_ATM_interface+12] 2>& 1
ATM_IF_TYPE=
+ha_oracle_rg:cl_swap_IP_address[+1197] [[ = atm ]]
+ha_oracle_rg:cl_swap_IP_address[+1211] +ha_oracle_rg:cl_swap_IP_address[+1211] cut -f3 -d~
+ha_oracle_rg:cl_swap_IP_address[+1211] cllsif -J ~ -Sw -n 192.168.2.11
NET=net_ether_01
+ha_oracle_rg:cl_swap_IP_address[+1212] +ha_oracle_rg:cl_swap_IP_address[+1212] cut -f3 -d~
+ha_oracle_rg:cl_swap_IP_address[+1212] cllsnw -J ~ -Sw -n net_ether_01
ALIAS=true
+ha_oracle_rg:cl_swap_IP_address[+1213] +ha_oracle_rg:cl_swap_IP_address[+1213] cut -f4 -d~
+ha_oracle_rg:cl_swap_IP_address[+1213] cllsif -J ~ -Sw -n 192.168.2.11
NET_TYPE=ether
+ha_oracle_rg:cl_swap_IP_address[+1213] [[ ether = hps ]]
+ha_oracle_rg:cl_swap_IP_address[+1213] [[ true = true ]]
+ha_oracle_rg:cl_swap_IP_address[+1233] [ release = acquire ]
+ha_oracle_rg:cl_swap_IP_address[+1271] cl_echo 7320 cl_swap_IP_address: Removing aliased IP address 192.168.4.13 from adapter en1 cl_swap_IP_address 192.168.4.13 en1
+ha_oracle_rg:cl_echo[+35] version=1.16
+ha_oracle_rg:cl_echo[+84] HACMP_OUT_FILE=/var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+89] set +u
+ha_oracle_rg:cl_echo[+89] [[ -n ]]
+ha_oracle_rg:cl_echo[+92] set -u
+ha_oracle_rg:cl_echo[+95] print -n -u2 Mar 24 2013 19:22:23
Mar 24 2013 19:22:23 +ha_oracle_rg:cl_echo[+96] MSG_ID=7320
+ha_oracle_rg:cl_echo[+97] shift
+ha_oracle_rg:cl_echo[+98] dspmsg scripts.cat 7320 cl_swap_IP_address: Removing aliased IP address 192.168.4.13 from adapter en1 cl_swap_IP_address 192.168.4.13 en1
+ha_oracle_rg:cl_echo[+98] 1>& 2
cl_swap_IP_address: Removing aliased IP address 192.168.4.13 from adapter en1+ha_oracle_rg:cl_echo[+101] clsynclog /var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+101] 1> /dev/null 2>& 1
+ha_oracle_rg:cl_echo[+105] return 0
+ha_oracle_rg:cl_swap_IP_address[+1273] PERSISTENT=
+ha_oracle_rg:cl_swap_IP_address[+1274] ADDR1=192.168.4.13
+ha_oracle_rg:cl_swap_IP_address[+1274] [[ __AIX__ = __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[+1275] disable_pmtu_gated
Setting tcp_pmtu_discover to 0
Setting udp_pmtu_discover to 0
+ha_oracle_rg:cl_swap_IP_address[+1276] alias_replace_routes /usr/es/sbin/cluster/.restore_routes en1
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+5] RR=/usr/es/sbin/cluster/.restore_routes
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+6] shift
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+7] interfaces=en1
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+9] cp /dev/null /usr/es/sbin/cluster/.restore_routes
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+11] cat
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+11] 1> /usr/es/sbin/cluster/.restore_routes 0<<
#!/bin/ksh
#
# Script created by cl_swap_IP_address on +ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+11] date
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+11] date
Sun Mar 24 19:22:23 BEIST 2013
#
PATH=/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/events/emulate:/usr/es/sbin/cluster/events/emulate/driver:/usr/es/sbin/cluster/events/emulate/utils:/usr/es/sbin/cluster/tguides/bin:/usr/es/sbin/cluster/tguides/classes:/usr/es/sbin/cluster/tguides/images:/usr/es/sbin/cluster/tguides/scripts:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin
PS4='${GROUPNAME:++$GROUPNAME}:${PROGNAME:-${0##*/}}${PS4_TIMER:+($SECONDS)}${PS4_LOOP:+:$PS4_LOOP}[${ERRNO:+${PS4_FUNC:-}+}$LINENO] '
export VERBOSE_LOGGING=${VERBOSE_LOGGING:-"high"}
[[ "$VERBOSE_LOGGING" = "high" ]] && set -x
: Starting $0 at $(date)
#
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+11] [[ __AIX__ = __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+26] +ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+26] awk $3 !~ "[Ll]ink" && $3 !~ ":" {print $4}
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+26] netstat -in
LOCADDRS=Address
192.168.2.11
192.168.4.13
192.168.3.11
192.168.4.11
127.0.0.1
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+31] netstat -rnC
Routing tables
Destination Gateway Flags Wt Policy If Cost Config_Cost
Route tree for Protocol Family 2 (Internet):
127/8 127.0.0.1 U 1 - lo0 0 0
192.168.2.0 192.168.2.11 UHSb 1 - en1 0 0 =>
192.168.2/24 192.168.2.11 U 1 - en1 0 0
192.168.2.11 127.0.0.1 UGHS 1 - lo0 0 0
192.168.2.255 192.168.2.11 UHSb 1 - en1 0 0
192.168.3.0 192.168.3.11 UHSb 1 - en2 0 0 =>
192.168.3/24 192.168.3.11 U 1 - en2 0 0
192.168.3.11 127.0.0.1 UGHS 1 - lo0 0 0
192.168.3.255 192.168.3.11 UHSb 1 - en2 0 0
192.168.4.0 192.168.4.11 UHSb 1 WRR en2 0 0 =>
192.168.4.0 192.168.4.13 UHSb 1 -"- en1 0 0 =>
192.168.4/24 192.168.4.11 U 1 WRR en2 0 0 =>
192.168.4/24 192.168.4.13 U 1 -"- en1 0 0
192.168.4.11 127.0.0.1 UGHS 1 - lo0 0 0
192.168.4.13 127.0.0.1 UGHS 1 - lo0 0 0
192.168.4.255 192.168.4.11 UHSb 1 WRR en2 0 0 =>
192.168.4.255 192.168.4.13 UHSb 1 -"- en1 0 0
Route tree for Protocol Family 24 (Internet v6):
::1%1 ::1%1 UH 1 - lo0 0 0
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+39] typeset -i I=1
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+41] NXTSVC=
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+41] [[ __AIX__ = __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+44] +ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+44] awk $3 !~ "[Ll]ink" && $3 !~ ":" && ($1 == "en1" || $1 == "en1*") {print $4}
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+44] netstat -in
IFADDRS=192.168.2.11
192.168.4.13
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+60] +ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+60] cllsif -J ~ -Spi ha_db01
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+60] grep ~net_ether_01~
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+60] grep -E ~service~|~persistent~
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+60] cut -d~ -f7
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+60] sort -u
SVCADDRS=192.168.4.11
192.168.4.13
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+64] +ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+64] awk $1 !~ ":" {print $1}
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+64] echo 192.168.4.11
192.168.4.13
SVCADDRS=192.168.4.11
192.168.4.13
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+68] +ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+68] cllsif -J ~ -Spi ha_db01
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+68] grep ~net_ether_01~
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+68] grep -E ~persistent~
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+68] cut -d~ -f7
PERSISTENT_IP=192.168.4.11
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+69] routeaddr=
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+69] clgetnet 192.168.2.11 255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+69] clgetnet 192.168.4.11 255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+69] [[ 192.168.2.0 = 192.168.4.0 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+69] clgetnet 192.168.2.11 255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+69] clgetnet 192.168.4.13 255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+69] [[ 192.168.2.0 = 192.168.4.0 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+69] [[ -n ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+69] clgetnet 192.168.4.13 255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+69] clgetnet 192.168.4.11 255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+69] [[ 192.168.4.0 = 192.168.4.0 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+69] [[ -z ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+78] routeaddr=192.168.4.13
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+78] [[ 192.168.4.13 = 192.168.4.11 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+78] clgetnet 192.168.4.13 255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+78] clgetnet 192.168.4.13 255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+78] [[ 192.168.4.0 = 192.168.4.0 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+78] [[ -z 192.168.4.13 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+78] [[ 192.168.4.13 = 192.168.4.13 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+78] [[ 192.168.4.13 != 192.168.4.13 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+78] [[ -n ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+93] swaproute=0
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+93] [[ __AIX__ = __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+95] NETSTAT_FLAGS=-nrf inet
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+95] [[ 192.168.4.13 = 192.168.4.13 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+103] swaproute=1
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+106] netstat -nrf inet
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+106] fgrep -w en1
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+106] read DEST GW FLAGS OTHER
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] LOOPBACK=127.0.0.1
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] [[ 192.168.4.13 = 192.168.2.11 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] [[ 192.168.4.11 = 192.168.2.11 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+106] read DEST GW FLAGS OTHER
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] LOOPBACK=127.0.0.1
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] clgetnet 192.168.4.13 255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] clgetnet 192.168.2.11 255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] [[ 192.168.4.0 = 192.168.2.0 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+106] read DEST GW FLAGS OTHER
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] LOOPBACK=127.0.0.1
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] [[ 192.168.4.13 = 192.168.2.11 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] [[ 192.168.4.11 = 192.168.2.11 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+106] read DEST GW FLAGS OTHER
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] LOOPBACK=127.0.0.1
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] [[ 192.168.4.13 = 192.168.4.13 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] [[ != ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] [[ 192.168.4.13 = 192.168.4.13 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+133] route delete -host 192.168.4.0 192.168.4.13
192.168.4.13 host 192.168.4.0: gateway 192.168.4.13
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+106] read DEST GW FLAGS OTHER
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] LOOPBACK=127.0.0.1
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] clgetnet 192.168.4.13 255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] clgetnet 192.168.4.13 255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] [[ 192.168.4.0 = 192.168.4.0 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] [[ 192.168.4.13 = 192.168.4.13 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] [[ != ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] [[ 192.168.4.13 = 192.168.4.13 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+118] route delete -net 192.168.4/24 192.168.4.13
192.168.4.13 net 192.168.4: gateway 192.168.4.13
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+106] read DEST GW FLAGS OTHER
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] LOOPBACK=127.0.0.1
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] [[ 192.168.4.13 = 192.168.4.13 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] [[ != ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+107] [[ 192.168.4.13 = 192.168.4.13 ]]
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+133] route delete -host 192.168.4.255 192.168.4.13
192.168.4.13 host 192.168.4.255: gateway 192.168.4.13
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+106] read DEST GW FLAGS OTHER
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+192] chmod +x /usr/es/sbin/cluster/.restore_routes
+ha_oracle_rg:cl_swap_IP_address[alias_replace_routes+193] return 0
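At this point `alias_replace_routes` has finished: it deleted the routes tied to the alias 192.168.4.13 and, at the same time, wrote an undo script (`.restore_routes`) so the routes can be replayed if the release is rolled back. A minimal sketch of that record-the-inverse pattern — the path, function name, and addresses here are illustrative, not the actual cl_swap_IP_address code:

```shell
# Sketch of the pattern seen in the trace: while deleting each route for the
# alias, append the matching "route add" to an undo script first.
RR=/tmp/restore_routes.$$   # hypothetical path; HACMP uses /usr/es/sbin/cluster/.restore_routes
: > "$RR"

delete_route() {            # $1 = -net|-host, $2 = destination, $3 = gateway
    # Record the inverse operation before touching the routing table.
    echo "route add $1 $2 $3" >> "$RR"
    # Dry run: print instead of executing the AIX-only route command.
    echo "would run: route delete $1 $2 $3"
}

# Same routes the trace deletes for alias 192.168.4.13 on en1:
delete_route -net 192.168.4/24 192.168.4.13
delete_route -host 192.168.4.255 192.168.4.13

chmod +x "$RR"              # the undo script is made executable, as in the trace
```

The real event script then runs `.restore_routes` after the alias is removed (visible further down in the trace), so surviving routes on the same subnet come back cleanly.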
+ha_oracle_rg:cl_swap_IP_address[+1276] [[ __AIX__ = __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[+1280] clifconfig en1 delete 192.168.4.13
+ha_oracle_rg:clifconfig[51] [[ high == high ]]
+ha_oracle_rg:clifconfig[51] version=1.3
+ha_oracle_rg:clifconfig[53] set -A args en1 delete 192.168.4.13
+ha_oracle_rg:clifconfig[60] [[ -n en1 ]]
+ha_oracle_rg:clifconfig[60] [[ e != - ]]
+ha_oracle_rg:clifconfig[61] interface=en1
+ha_oracle_rg:clifconfig[62] shift
+ha_oracle_rg:clifconfig[64] [[ -n delete ]]
+ha_oracle_rg:clifconfig[67] delete=1
+ha_oracle_rg:clifconfig[90] shift
+ha_oracle_rg:clifconfig[64] [[ -n 192.168.4.13 ]]
+ha_oracle_rg:clifconfig[69] params=' address=192.168.4.13'
+ha_oracle_rg:clifconfig[69] addr=192.168.4.13
+ha_oracle_rg:clifconfig[90] shift
+ha_oracle_rg:clifconfig[64] [[ -n '' ]]
+ha_oracle_rg:clifconfig[93] [[ -n 1 ]]
+ha_oracle_rg:clifconfig[93] [[ -n ha_oracle_rg ]]
+ha_oracle_rg:clifconfig[94] clwparname ha_oracle_rg
+ha_oracle_rg:clwparname[35] [[ high == high ]]
+ha_oracle_rg:clwparname[35] version=1.3
+ha_oracle_rg:clwparname[37] . /usr/es/sbin/cluster/wpar/wpar_utils
+ha_oracle_rg:clwparname[+20] ERRNO=0
+ha_oracle_rg:clwparname[+22] [[ high == high ]]
+ha_oracle_rg:clwparname[+22] set -x
+ha_oracle_rg:clwparname[+23] [[ high == high ]]
+ha_oracle_rg:clwparname[+23] version=1.10
+ha_oracle_rg:clwparname[+25] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+ha_oracle_rg:clwparname[+20] [[ high == high ]]
+ha_oracle_rg:clwparname[+20] set -x
+ha_oracle_rg:clwparname[+21] [[ high == high ]]
+ha_oracle_rg:clwparname[+21] version=1.4
+ha_oracle_rg:clwparname[+23] PATH=/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/events/emulate:/usr/es/sbin/cluster/events/emulate/driver:/usr/es/sbin/cluster/events/emulate/utils:/usr/es/sbin/cluster/tguides/bin:/usr/es/sbin/cluster/tguides/classes:/usr/es/sbin/cluster/tguides/images:/usr/es/sbin/cluster/tguides/scripts:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+ha_oracle_rg:clwparname[+24] export PATH
+ha_oracle_rg:clwparname[+26] typeset usageErr invalArgErr internalErr
+ha_oracle_rg:clwparname[+28] usageErr=10
+ha_oracle_rg:clwparname[+29] invalArgErr=11
+ha_oracle_rg:clwparname[+30] internalErr=12
+ha_oracle_rg:clwparname[+39] rgName=ha_oracle_rg
+ha_oracle_rg:clwparname[+42] uname
+ha_oracle_rg:clwparname[+42] OSNAME=AIX
+ha_oracle_rg:clwparname[+51] [[ AIX == *AIX* ]]
+ha_oracle_rg:clwparname[+54] lslpp -l bos.wpars
+ha_oracle_rg:clwparname[+54] 1> /dev/null 2>& 1
+ha_oracle_rg:clwparname[+56] loadWparName ha_oracle_rg
+ha_oracle_rg:clwparname[loadWparName+5] usage='Usage: loadWparName <resource group name>'
+ha_oracle_rg:clwparname[loadWparName+5] typeset -r usage
+ha_oracle_rg:clwparname[loadWparName+6] typeset rgName wparName wparDir rc
+ha_oracle_rg:clwparname[loadWparName+8] [[ 1 < 1 ]]
+ha_oracle_rg:clwparname[loadWparName+13] rgName=ha_oracle_rg
+ha_oracle_rg:clwparname[loadWparName+14] wparDir=/var/hacmp/adm/wpar
+ha_oracle_rg:clwparname[loadWparName+15] [[ ! -d /var/hacmp/adm/wpar ]]
+ha_oracle_rg:clwparname[loadWparName+22] [[ -f /var/hacmp/adm/wpar/ha_oracle_rg ]]
+ha_oracle_rg:clwparname[loadWparName+23] cat /var/hacmp/adm/wpar/ha_oracle_rg
+ha_oracle_rg:clwparname[loadWparName+23] wparName=''
+ha_oracle_rg:clwparname[loadWparName+24] [[ -n '' ]]
+ha_oracle_rg:clwparname[loadWparName+36] return 0
+ha_oracle_rg:clwparname[+56] wparName=''
+ha_oracle_rg:clwparname[+57] rc=0
+ha_oracle_rg:clwparname[+58] (( 0 != 0 ))
+ha_oracle_rg:clwparname[+64] printf %s
+ha_oracle_rg:clwparname[+65] exit 0
+ha_oracle_rg:clifconfig[94] WPARNAME=''
+ha_oracle_rg:clifconfig[95] [[ -n '' ]]
+ha_oracle_rg:clifconfig[113] ifconfig en1 delete 192.168.4.13
+ha_oracle_rg:cl_swap_IP_address[+1280] [[ -n ]]
+ha_oracle_rg:cl_swap_IP_address[+1303] /usr/es/sbin/cluster/.restore_routes
+ha_oracle_rg:.restore_routes[+9] date
+ha_oracle_rg:.restore_routes[+9] : Starting /usr/es/sbin/cluster/.restore_routes at Sun Mar 24 19:22:23 BEIST 2013
+ha_oracle_rg:cl_swap_IP_address[+1304] : Completed /usr/es/sbin/cluster/.restore_routes with return code 0.
+ha_oracle_rg:cl_swap_IP_address[+1304] [[ __AIX__ = __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[+1305] enable_pmtu_gated
Setting tcp_pmtu_discover to 1
Setting udp_pmtu_discover to 1
+ha_oracle_rg:cl_swap_IP_address[+1308] cl_hats_adapter en1 -d 192.168.4.13 alias
+ha_oracle_rg:cl_hats_adapter[+50] [[ high = high ]]
+ha_oracle_rg:cl_hats_adapter[+50] version=1.40
+ha_oracle_rg:cl_hats_adapter[+51] +ha_oracle_rg:cl_hats_adapter[+51] cl_get_path
HA_DIR=es
+ha_oracle_rg:cl_hats_adapter[+52] +ha_oracle_rg:cl_hats_adapter[+52] cl_get_path -S
OP_SEP=~
+ha_oracle_rg:cl_hats_adapter[+57] [[ -f /usr/es/sbin/cluster/clstrmgr ]]
+ha_oracle_rg:cl_hats_adapter[+62] clcheck_server grpsvcs
+ha_oracle_rg:clcheck_server[95] [[ high == high ]]
+ha_oracle_rg:clcheck_server[95] version=1.10.1.5
+ha_oracle_rg:clcheck_server[96] cl_get_path
+ha_oracle_rg:clcheck_server[96] HA_DIR=es
+ha_oracle_rg:clcheck_server[98] SERVER=grpsvcs
+ha_oracle_rg:clcheck_server[99] STATUS=0
+ha_oracle_rg:clcheck_server[100] FATAL_ERROR=255
+ha_oracle_rg:clcheck_server[101] retries=0
+ha_oracle_rg:clcheck_server[101] typeset -i retries
+ha_oracle_rg:clcheck_server[103] [[ -n grpsvcs ]]
+ha_oracle_rg:clcheck_server[110] [[ 0 < 3 ]]
+ha_oracle_rg:clcheck_server[114] lssrc -s grpsvcs
+ha_oracle_rg:clcheck_server[114] 1> /dev/null 2> /dev/null
+ha_oracle_rg:clcheck_server[128] egrep 'stop|active'
+ha_oracle_rg:clcheck_server[128] lssrc -s grpsvcs
+ha_oracle_rg:clcheck_server[128] LC_ALL=C
+ha_oracle_rg:clcheck_server[128] check_if_down=' grpsvcs grpsvcs 7930022 active'
+ha_oracle_rg:clcheck_server[133] [[ -z ' grpsvcs grpsvcs 7930022 active' ]]
+ha_oracle_rg:clcheck_server[154] check_server_extended grpsvcs
+ha_oracle_rg:clcheck_server[48] PS4_FUNC=check_server_extended
+ha_oracle_rg:clcheck_server[48] typeset PS4_FUNC
+ha_oracle_rg:clcheck_server[51] SERVER=grpsvcs
+ha_oracle_rg:clcheck_server[51] typeset SERVER
+ha_oracle_rg:clcheck_server[52] STATUS=1
+ha_oracle_rg:clcheck_server[52] typeset STATUS
+ha_oracle_rg:clcheck_server[60] grep -q CLSTRMGR_
+ha_oracle_rg:clcheck_server[60] lssrc -ls grpsvcs
+ha_oracle_rg:clcheck_server[60] LC_ALL=C
+ha_oracle_rg:clcheck_server[67] echo 1
+ha_oracle_rg:clcheck_server[68] return
+ha_oracle_rg:clcheck_server[154] STATUS=1
+ha_oracle_rg:clcheck_server[155] return 1
+ha_oracle_rg:cl_hats_adapter[+72] [[ 1 == 0 ]]
+ha_oracle_rg:cl_hats_adapter[+79] IF=en1
+ha_oracle_rg:cl_hats_adapter[+91] FLAG=-d
+ha_oracle_rg:cl_hats_adapter[+94] ADDRESS=192.168.4.13
+ha_oracle_rg:cl_hats_adapter[+96] ADDRESS1=alias
+ha_oracle_rg:cl_hats_adapter[+99] USEHWAT=FALSE
+ha_oracle_rg:cl_hats_adapter[+102] INDARE=FALSE
+ha_oracle_rg:cl_hats_adapter[+104] [[ alias == hwat ]]
+ha_oracle_rg:cl_hats_adapter[+104] [[ alias == dare ]]
+ha_oracle_rg:cl_hats_adapter[+118] cl_migcheck HAES
+ha_oracle_rg:cl_hats_adapter[+119] [ 0 -eq 1 ]
+ha_oracle_rg:cl_hats_adapter[+123] cldomain
+ha_oracle_rg:cl_hats_adapter[+123] export HA_DOMAIN_NAME=ha_oracle
+ha_oracle_rg:cl_hats_adapter[+125] export HB_SERVER_SOCKET=/var/ha/soc/topsvcs/server_socket
+ha_oracle_rg:cl_hats_adapter[+127] set -u
+ha_oracle_rg:cl_hats_adapter[+132] [ alias = alias ]
+ha_oracle_rg:cl_hats_adapter[+136] hats_adapter_notify en1 -d 192.168.4.13 alias
2013-03-24T19:22:24.050358 hats_adapter_notify
2013-03-24T19:22:24.068532 hats_adapter_notify
+ha_oracle_rg:cl_hats_adapter[+137] : rc_hats_adapter_notify = 0
+ha_oracle_rg:cl_hats_adapter[+139] exit 0
+ha_oracle_rg:cl_swap_IP_address[+1312] check_alias_status en1 192.168.4.13 release
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+5] CH_INTERFACE=en1
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+6] CH_ADDRESS=192.168.4.13
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+7] CH_ACQ_OR_RLSE=release
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+15] IF_IB=en1
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+17] +ha_oracle_rg:cl_swap_IP_address[check_alias_status+17] awk {print index($0, "ib")}
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+17] echo en1
IS_IB=0
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+17] [[ __AIX__ = __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+21] [ 0 -ne 1 ]
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+23] +ha_oracle_rg:cl_swap_IP_address[check_alias_status+23] clifconfig en1
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+23] awk {print $2}
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+23] fgrep -w 192.168.4.13
+ha_oracle_rg:clifconfig[51] [[ high == high ]]
+ha_oracle_rg:clifconfig[51] version=1.3
+ha_oracle_rg:clifconfig[53] set -A args en1
+ha_oracle_rg:clifconfig[60] [[ -n en1 ]]
+ha_oracle_rg:clifconfig[60] [[ e != - ]]
+ha_oracle_rg:clifconfig[61] interface=en1
+ha_oracle_rg:clifconfig[62] shift
+ha_oracle_rg:clifconfig[64] [[ -n '' ]]
+ha_oracle_rg:clifconfig[93] [[ -n '' ]]
+ha_oracle_rg:clifconfig[113] ifconfig en1
ADDR=
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+31] [ release = acquire ]
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+31] [[ = 192.168.4.13 ]]
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+46] return 0
+ha_oracle_rg:cl_swap_IP_address[+1312] [[ 0 -ne 0 ]]
+ha_oracle_rg:cl_swap_IP_address[+1327] flush_arp
+ha_oracle_rg:cl_swap_IP_address[flush_arp+4] arp -an
+ha_oracle_rg:cl_swap_IP_address[flush_arp+4] grep \?
+ha_oracle_rg:cl_swap_IP_address[flush_arp+4] tr -d ()
+ha_oracle_rg:cl_swap_IP_address[flush_arp+4] read host addr other
+ha_oracle_rg:cl_swap_IP_address[flush_arp+5] arp -d 192.168.3.12
192.168.3.12 (192.168.3.12) deleted
+ha_oracle_rg:cl_swap_IP_address[flush_arp+4] read host addr other
+ha_oracle_rg:cl_swap_IP_address[flush_arp+5] arp -d 192.168.2.10
192.168.2.10 (192.168.2.10) deleted
+ha_oracle_rg:cl_swap_IP_address[flush_arp+4] read host addr other
+ha_oracle_rg:cl_swap_IP_address[flush_arp+5] arp -d 192.168.2.12
192.168.2.12 (192.168.2.12) deleted
+ha_oracle_rg:cl_swap_IP_address[flush_arp+4] read host addr other
+ha_oracle_rg:cl_swap_IP_address[flush_arp+5] arp -d 192.168.4.12
192.168.4.12 (192.168.4.12) deleted
+ha_oracle_rg:cl_swap_IP_address[flush_arp+4] read host addr other
+ha_oracle_rg:cl_swap_IP_address[flush_arp+7] return 0
+ha_oracle_rg:cl_swap_IP_address[+1489] netstat -in
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
en1 1500 link#2 0.9.6b.2e.1c.44 194809 0 101850 4 0
en1 1500 192.168.2 192.168.2.11 194809 0 101850 4 0
en2 1500 link#3 0.11.25.bf.a8.6b 8907 0 5799 3 0
en2 1500 192.168.3 192.168.3.11 8907 0 5799 3 0
en2 1500 192.168.4 192.168.4.11 8907 0 5799 3 0
lo0 16896 link#1 33418 0 33418 0 0
lo0 16896 127 127.0.0.1 33418 0 33418 0 0
lo0 16896 ::1%1 33418 0 33418 0 0
+ha_oracle_rg:cl_swap_IP_address[+1490] netstat -rnC
Routing tables
Destination Gateway Flags Wt Policy If Cost Config_Cost
Route tree for Protocol Family 2 (Internet):
127/8 127.0.0.1 U 1 - lo0 0 0
192.168.2.0 192.168.2.11 UHSb 1 - en1 0 0 =>
192.168.2/24 192.168.2.11 U 1 - en1 0 0
192.168.2.11 127.0.0.1 UGHS 1 - lo0 0 0
192.168.2.255 192.168.2.11 UHSb 1 - en1 0 0
192.168.3.0 192.168.3.11 UHSb 1 - en2 0 0 =>
192.168.3/24 192.168.3.11 U 1 - en2 0 0
192.168.3.11 127.0.0.1 UGHS 1 - lo0 0 0
192.168.3.255 192.168.3.11 UHSb 1 - en2 0 0
192.168.4.0 192.168.4.11 UHSb 1 WRR en2 0 0 =>
192.168.4/24 192.168.4.11 U 1 WRR en2 0 0
192.168.4.11 127.0.0.1 UGHS 1 - lo0 0 0
192.168.4.255 192.168.4.11 UHSb 1 WRR en2 0 0
Route tree for Protocol Family 24 (Internet v6):
::1%1 ::1%1 UH 1 - lo0 0 0
+ha_oracle_rg:cl_swap_IP_address[+1956] [[ __AIX__ == __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[+1956] restore_ipignoreredirects
Setting ipignoreredirects to 0
+ha_oracle_rg:cl_swap_IP_address[+1958] cl_echo 32 Completed execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating release en1 192.168.2.11 192.168.4.13 255.255.255.0. Exit status = 0 /usr/es/sbin/cluster/events/utils/cl_swap_IP_address rotating release en1 192.168.2.11 192.168.4.13 255.255.255.0 0
+ha_oracle_rg:cl_echo[+35] version=1.16
+ha_oracle_rg:cl_echo[+84] HACMP_OUT_FILE=/var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+89] set +u
+ha_oracle_rg:cl_echo[+89] [[ -n ]]
+ha_oracle_rg:cl_echo[+92] set -u
+ha_oracle_rg:cl_echo[+95] print -n -u2 Mar 24 2013 19:22:24
Mar 24 2013 19:22:24 +ha_oracle_rg:cl_echo[+96] MSG_ID=32
+ha_oracle_rg:cl_echo[+97] shift
+ha_oracle_rg:cl_echo[+98] dspmsg scripts.cat 32 Completed execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating release en1 192.168.2.11 192.168.4.13 255.255.255.0. Exit status = 0 /usr/es/sbin/cluster/events/utils/cl_swap_IP_address rotating release en1 192.168.2.11 192.168.4.13 255.255.255.0 0
+ha_oracle_rg:cl_echo[+98] 1>& 2
Completed execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating release en1 192.168.2.11 192.168.4.13 255.255.255.0. Exit status = 0+ha_oracle_rg:cl_echo[+101] clsynclog /var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+101] 1> /dev/null 2>& 1
+ha_oracle_rg:cl_echo[+105] return 0
+ha_oracle_rg:cl_swap_IP_address[+1960] date
Sun Mar 24 19:22:24 BEIST 2013
+ha_oracle_rg:cl_swap_IP_address[+1962] exit 0
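cl_swap_IP_address exits 0 here, so the service-IP release itself succeeded. Condensed, the sequence the trace just walked through is: delete the alias's routes, remove the aliased address from the adapter, notify topology services (`cl_hats_adapter`), then flush ARP so peers re-learn the address. A dry-run sketch of those steps, using the same interface and addresses as the trace — this is an illustration, not the HACMP script, and the AIX-only commands are printed rather than executed:

```shell
# Dry-run walkthrough of the alias-release steps from the trace above.
DRY_RUN=${DRY_RUN:-1}
IF=en1
SVC=192.168.4.13

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"    # print the step instead of executing it
    else
        "$@"
    fi
}

# 1. Drop routes that point at the alias (route delete lines in the trace).
run route delete -net 192.168.4/24 "$SVC"
run route delete -host 192.168.4.255 "$SVC"

# 2. Remove the aliased service address from the adapter.
run ifconfig "$IF" delete "$SVC"

# 3. Flush stale ARP entries so peers re-learn who owns the service IP.
run arp -d 192.168.4.12
```

Note the final `netstat -in` in the trace confirms the result: 192.168.4.13 is gone from en1, while the persistent address 192.168.4.11 stays on en2. If the move then fails on the other node, the problem is in the acquire phase there, not in this release.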
+ha_oracle_rg:release_service_addr[+342] RC=0
+ha_oracle_rg:release_service_addr[+345] [ 0 -ne 0 ]
+ha_oracle_rg:release_service_addr[+367] ALLNOERRSRV=All_nonerror_service_addrs
+ha_oracle_rg:release_service_addr[+367] [[ REAL = EMUL ]]
+ha_oracle_rg:release_service_addr[+373] cl_RMupdate resource_down All_nonerror_service_addrs release_service_addr
2013-03-24T19:22:24.248613
2013-03-24T19:22:24.262602
Reference string: Sun.Mar.24.19:22:24.BEIST.2013.release_service_addr.All_nonerror_service_addrs.ha_oracle_rg.ref
+ha_oracle_rg:release_service_addr[+378] [[ UNDEFINED != UNDEFINED ]]
+ha_oracle_rg:release_service_addr[+381] export NSORDER=
+ha_oracle_rg:release_service_addr[+384] exit 0
Mar 24 19:22:24 EVENT COMPLETED: release_service_addr ser 0
+ha_oracle_rg:node_down_local[494] : exit status of release_addr is: 0
+ha_oracle_rg:node_down_local[504] clstop_wpar
+ha_oracle_rg:clstop_wpar[28] [[ high == high ]]
+ha_oracle_rg:clstop_wpar[28] version=1.5
+ha_oracle_rg:clstop_wpar[30] uname
+ha_oracle_rg:clstop_wpar[30] OSNAME=AIX
+ha_oracle_rg:clstop_wpar[39] [[ AIX == *AIX* ]]
+ha_oracle_rg:clstop_wpar[40] lslpp -l bos.wpars
+ha_oracle_rg:clstop_wpar[40] 1> /dev/null 2>& 1
+ha_oracle_rg:clstop_wpar[42] . /usr/es/sbin/cluster/wpar/wpar_utils
+ha_oracle_rg:clstop_wpar[+20] ERRNO=0
+ha_oracle_rg:clstop_wpar[+22] [[ high == high ]]
+ha_oracle_rg:clstop_wpar[+22] set -x
+ha_oracle_rg:clstop_wpar[+23] [[ high == high ]]
+ha_oracle_rg:clstop_wpar[+23] version=1.10
+ha_oracle_rg:clstop_wpar[+25] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+ha_oracle_rg:clstop_wpar[+20] [[ high == high ]]
+ha_oracle_rg:clstop_wpar[+20] set -x
+ha_oracle_rg:clstop_wpar[+21] [[ high == high ]]
+ha_oracle_rg:clstop_wpar[+21] version=1.4
+ha_oracle_rg:clstop_wpar[+23] PATH=/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/events/emulate:/usr/es/sbin/cluster/events/emulate/driver:/usr/es/sbin/cluster/events/emulate/utils:/usr/es/sbin/cluster/tguides/bin:/usr/es/sbin/cluster/tguides/classes:/usr/es/sbin/cluster/tguides/images:/usr/es/sbin/cluster/tguides/scripts:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+ha_oracle_rg:clstop_wpar[+24] export PATH
+ha_oracle_rg:clstop_wpar[+26] typeset usageErr invalArgErr internalErr
+ha_oracle_rg:clstop_wpar[+28] usageErr=10
+ha_oracle_rg:clstop_wpar[+29] invalArgErr=11
+ha_oracle_rg:clstop_wpar[+30] internalErr=12
+ha_oracle_rg:clstop_wpar[+45] [[ node_down_local == reconfig_resource_release ]]
+ha_oracle_rg:clstop_wpar[+51] typeset wparName state
+ha_oracle_rg:clstop_wpar[+52] typeset -i result
+ha_oracle_rg:clstop_wpar[+54] loadWparName ha_oracle_rg
+ha_oracle_rg:clstop_wpar[loadWparName+5] usage='Usage: loadWparName <resource group name>'
+ha_oracle_rg:clstop_wpar[loadWparName+5] typeset -r usage
+ha_oracle_rg:clstop_wpar[loadWparName+6] typeset rgName wparName wparDir rc
+ha_oracle_rg:clstop_wpar[loadWparName+8] [[ 1 < 1 ]]
+ha_oracle_rg:clstop_wpar[loadWparName+13] rgName=ha_oracle_rg
+ha_oracle_rg:clstop_wpar[loadWparName+14] wparDir=/var/hacmp/adm/wpar
+ha_oracle_rg:clstop_wpar[loadWparName+15] [[ ! -d /var/hacmp/adm/wpar ]]
+ha_oracle_rg:clstop_wpar[loadWparName+22] [[ -f /var/hacmp/adm/wpar/ha_oracle_rg ]]
+ha_oracle_rg:clstop_wpar[loadWparName+23] cat /var/hacmp/adm/wpar/ha_oracle_rg
+ha_oracle_rg:clstop_wpar[loadWparName+23] wparName=''
+ha_oracle_rg:clstop_wpar[loadWparName+24] [[ -n '' ]]
+ha_oracle_rg:clstop_wpar[loadWparName+36] return 0
+ha_oracle_rg:clstop_wpar[+54] wparName=''
+ha_oracle_rg:clstop_wpar[+56] [[ -z '' ]]
+ha_oracle_rg:clstop_wpar[+57] clearWparName ha_oracle_rg
+ha_oracle_rg:clstop_wpar[clearWparName+5] usage='Usage: clearWparName <resource group name>'
+ha_oracle_rg:clstop_wpar[clearWparName+5] typeset -r usage
+ha_oracle_rg:clstop_wpar[clearWparName+6] typeset rgName wparName wparDir
+ha_oracle_rg:clstop_wpar[clearWparName+8] [[ 1 < 1 ]]
+ha_oracle_rg:clstop_wpar[clearWparName+13] rgName=ha_oracle_rg
+ha_oracle_rg:clstop_wpar[clearWparName+14] wparDir=/var/hacmp/adm/wpar
+ha_oracle_rg:clstop_wpar[clearWparName+16] [[ ! -d /var/hacmp/adm/wpar ]]
+ha_oracle_rg:clstop_wpar[clearWparName+18] rm -f /var/hacmp/adm/wpar/ha_oracle_rg
+ha_oracle_rg:clstop_wpar[+58] exit 0
+ha_oracle_rg:node_down_local[505] RC=0
+ha_oracle_rg:node_down_local[506] : exit status of clstop_wpar is: 0
+ha_oracle_rg:node_down_local[508] (( 0 != 0 ))
+ha_oracle_rg:node_down_local[517] (( 0 != 0 ))
+ha_oracle_rg:node_down_local[524] [[ '' != CLEANUP ]]
+ha_oracle_rg:node_down_local[532] set +u
+ha_oracle_rg:node_down_local[533] NOT_DOIT=''
+ha_oracle_rg:node_down_local[534] set -u
+ha_oracle_rg:node_down_local[535] [[ '' != TRUE ]]
+ha_oracle_rg:node_down_local[537] [[ REAL == EMUL ]]
+ha_oracle_rg:node_down_local[545] clchdaemons -r -d clstrmgr_scripts -t resource_locator -o ha_oracle_rg
+ha_oracle_rg:node_down_local[554] cl_RMupdate rg_down ha_oracle_rg node_down_local
2013-03-24T19:22:24.472150
2013-03-24T19:22:24.484454
Reference string: Sun.Mar.24.19:22:24.BEIST.2013.node_down_local.ha_oracle_rg.ref
+ha_oracle_rg:node_down_local[559] exit 0
Mar 24 19:22:24 EVENT COMPLETED: node_down_local 0
+ha_oracle_rg:rg_move[+241] [ 0 -ne 0 ]
+ha_oracle_rg:rg_move[+247] UPDATESTATD=1
+ha_oracle_rg:rg_move[+254] process_resources
:process_resources[2423] [[ high == high ]]
:process_resources[2423] version=1.125
:process_resources[2425] STATUS=0
:process_resources[2426] sddsrv_off=FALSE
:process_resources[2428] true
:process_resources[2430] : call rgpa, and it will tell us what to do next
:process_resources[2432] set -a
:process_resources[2433] clRGPA
:clRGPA[+49] [[ high = high ]]
:clRGPA[+49] version=1.16
:clRGPA[+51] usingVer=clrgpa
:clRGPA[+56] clrgpa
2013-03-24T19:22:24.608690 clrgpa
:clRGPA[+57] exit 0
:process_resources[2433] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[2434] RC=0
:process_resources[2435] set +a
:process_resources[2437] (( 0 != 0 ))
:process_resources[2443] RESOURCE_GROUPS=ha_oracle_rg
+ha_oracle_rg:process_resources[2444] GROUPNAME=ha_oracle_rg
+ha_oracle_rg:process_resources[2444] export GROUPNAME
+ha_oracle_rg:process_resources[2748] break
+ha_oracle_rg:process_resources[2759] : If sddsrv was turned off above, turn it back on again
+ha_oracle_rg:process_resources[2761] [[ FALSE == TRUE ]]
+ha_oracle_rg:process_resources[2767] exit 0
+ha_oracle_rg:rg_move[+292] [ -f /tmp/.NFSSTOPPED ]
+ha_oracle_rg:rg_move[+312] [ -f /tmp/.RPCLOCKDSTOPPED ]
+ha_oracle_rg:rg_move[+337] exit 0
Mar 24 19:22:24 EVENT COMPLETED: rg_move ha_db01 1 RELEASE 0
:rg_move_release[+68] exit 0
Mar 24 19:22:24 EVENT COMPLETED: rg_move_release ha_db01 1 0
Mar 24 19:22:26 EVENT START: rg_move_fence ha_db01 1
:rg_move_fence[+57] [[ high = high ]]
:rg_move_fence[+57] version=1.11
:rg_move_fence[+58] export NODENAME=ha_db01
:rg_move_fence[+60] set -u
:rg_move_fence[+62] [ 2 != 2 ]
:rg_move_fence[+68] set +u
:rg_move_fence[+70] [[ -z FALSE ]]
:rg_move_fence[+75] [[ FALSE = TRUE ]]
:rg_move_fence[+98] process_resources FENCE
:process_resources[2423] [[ high == high ]]
:process_resources[2423] version=1.125
:process_resources[2425] STATUS=0
:process_resources[2426] sddsrv_off=FALSE
:process_resources[2428] true
:process_resources[2430] : call rgpa, and it will tell us what to do next
:process_resources[2432] set -a
:process_resources[2433] clRGPA FENCE
:clRGPA[+49] [[ high = high ]]
:clRGPA[+49] version=1.16
:clRGPA[+51] usingVer=clrgpa
:clRGPA[+56] clrgpa FENCE
2013-03-24T19:22:26.944924 clrgpa
:clRGPA[+57] exit 0
:process_resources[2433] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[2434] RC=0
:process_resources[2435] set +a
:process_resources[2437] (( 0 != 0 ))
:process_resources[2443] RESOURCE_GROUPS=''
:process_resources[2444] GROUPNAME=''
:process_resources[2444] export GROUPNAME
:process_resources[2748] break
:process_resources[2759] : If sddsrv was turned off above, turn it back on again
:process_resources[2761] [[ FALSE == TRUE ]]
:process_resources[2767] exit 0
:rg_move_fence[+99] : exit status of process_resources FENCE is: 0
:rg_move_fence[+102] [[ FALSE = TRUE ]]
:rg_move_fence[+136] exit 0
Mar 24 19:22:26 EVENT COMPLETED: rg_move_fence ha_db01 1 0
Mar 24 19:22:27 EVENT START: rg_move_acquire ha_db01 1
:rg_move_acquire[+54] [[ high = high ]]
:rg_move_acquire[+54] version=1.9
:rg_move_acquire[+56] set -u
:rg_move_acquire[+58] [ 2 != 2 ]
:rg_move_acquire[+64] set +u
:rg_move_acquire[+67] clcallev rg_move ha_db01 1 ACQUIRE
Mar 24 19:22:27 EVENT START: rg_move ha_db01 1 ACQUIRE
:get_local_nodename[+42] [[ high = high ]]
:get_local_nodename[+42] version=1.2.1.19
:get_local_nodename[+43] :get_local_nodename[+43] cl_get_path
HA_DIR=es
:get_local_nodename[+44] :get_local_nodename[+44] cl_get_path -S
OP_SEP=~
:get_local_nodename[+46] AIXODMDIR=/etc/objrepos
:get_local_nodename[+47] HAODMDIR=/etc/es/objrepos
:get_local_nodename[+49] :get_local_nodename[+49] uname -m
UNAME=0057D5EC4C00
:get_local_nodename[+55] export PLATFORM=__AIX__
:get_local_nodename[+61] export ODMDIR=/etc/es/objrepos
:get_local_nodename[+63] :get_local_nodename[+63] /usr/es/sbin/cluster/utilities/cllsclstr -N
nodename=ha_db01
:get_local_nodename[+65] :get_local_nodename[+65] cllsnode -cS
:get_local_nodename[+65] cut -d: -f1
NODENAME=ha_db01
ha_db02
:get_local_nodename[+69] [[ ha_db01 = ha_db01 ]]
:get_local_nodename[+72] print ha_db01
:get_local_nodename[+73] exit 0
:rg_move[+71] version=1.49
:rg_move[+81] STATUS=0
:rg_move[+83] [ ! -n ]
:rg_move[+85] EMULATE=REAL
:rg_move[+91] set -u
:rg_move[+93] export NODENAME=ha_db01
:rg_move[+94] RGID=1
:rg_move[+95] [[ 3 = 3 ]]
:rg_move[+97] ACTION=ACQUIRE
:rg_move[+104] odmget -qid=1 HACMPgroup
:rg_move[+104] egrep group =
:rg_move[+104] awk {print $3}
:rg_move[+104] eval RGNAME="ha_oracle_rg"
:rg_move[+104] RGNAME=ha_oracle_rg
:rg_move[+106] UPDATESTATD=0
:rg_move[+107] export UPDATESTATD
:rg_move[+111] export RG_MOVE_EVENT=true
:rg_move[+116] group_state=$RESGRP_ha_oracle_rg_ha_db01
:rg_move[+117] set +u
:rg_move[+118] eval print $RESGRP_ha_oracle_rg_ha_db01
:rg_move[+118] print ONLINE
:rg_move[+118] export RG_MOVE_ONLINE=ONLINE
:rg_move[+119] set -u
:rg_move[+120] RG_MOVE_ONLINE=ONLINE
:rg_move[+127] rm -f /tmp/.NFSSTOPPED
:rg_move[+128] rm -f /tmp/.RPCLOCKDSTOPPED
:rg_move[+135] set -a
:rg_move[+136] :rg_move[+136] clsetenvgrp ha_db01 rg_move ha_oracle_rg
:clsetenvgrp[+50] [[ high = high ]]
:clsetenvgrp[+50] version=1.16
:clsetenvgrp[+52] usingVer=clSetenvgrp
:clsetenvgrp[+57] clSetenvgrp ha_db01 rg_move ha_oracle_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+58] exit 0
clsetenvgrp_output=NFS_ha_oracle_rg="TRANS"
NFSNODE_ha_oracle_rg="ha_db02"
FORCEDOWN_GROUPS=""
RESOURCE_GROUPS="ha_oracle_rg"
HOMELESS_GROUPS=""
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS="R"
ASSOCIATE_ACTIONS="MO"
AUXILLIARY_ACTIONS="N"
:rg_move[+137] RC=0
:rg_move[+138] eval NFS_ha_oracle_rg="TRANS"
NFSNODE_ha_oracle_rg="ha_db02"
FORCEDOWN_GROUPS=""
RESOURCE_GROUPS="ha_oracle_rg"
HOMELESS_GROUPS=""
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS="R"
ASSOCIATE_ACTIONS="MO"
AUXILLIARY_ACTIONS="N"
:rg_move[+138] NFS_ha_oracle_rg=TRANS
:rg_move[+138] NFSNODE_ha_oracle_rg=ha_db02
:rg_move[+138] FORCEDOWN_GROUPS=
:rg_move[+138] RESOURCE_GROUPS=ha_oracle_rg
:rg_move[+138] HOMELESS_GROUPS=
:rg_move[+138] HOMELESS_FOLLOWER_GROUPS=
:rg_move[+138] ERRSTATE_GROUPS=
:rg_move[+138] PRINCIPAL_ACTIONS=R
:rg_move[+138] ASSOCIATE_ACTIONS=MO
:rg_move[+138] AUXILLIARY_ACTIONS=N
:rg_move[+139] set +a
:rg_move[+143] [[ 0 -ne 0 ]]
:rg_move[+143] [[ -z ha_oracle_rg ]]
:rg_move[+152] [[ -z FALSE ]]
:rg_move[+198] set -a
:rg_move[+199] clsetenvres ha_oracle_rg rg_move
:rg_move[+199] eval PRINCIPAL_ACTION="RELEASE" ASSOCIATE_ACTION="MOUNT" AUXILLIARY_ACTION="NONE" VG_RR_ACTION="RELEASE" SIBLING_NODES= SIBLING_NONOWNER_NODES= SIBLING_ACQUIRING_NODES= SIBLING_RELEASING_NODES= FOLLOWER_ACTION="NONE" NFS_HOST=ha_db02 DISK= CONCURRENT_VOLUME_GROUP= EXPORT_FILESYSTEM= EXPORT_FILESYSTEM_V4= STABLE_STORAGE_PATH= STABLE_STORAGE_COOKIE= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= COMMUNICATION_LINKS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= INACTIVE_TAKEOVER= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= PPRC_REP_RESOURCE= ERCMF_REP_RESOURCE= SVCPPRC_REP_RESOURCE= SR_REP_RESOURCE= TC_REP_RESOURCE= GMD_REP_RESOURCE= GMVG_REP_RESOURCE= OEM_VOLUME_GROUP= OEM_FILESYSTEM= APPLICATIONS="ha_app" FILESYSTEM="ALL" FORCED_VARYON="false" FSCHECK_TOOL="fsck" FS_BEFORE_IPADDR="false" RECOVERY_METHOD="sequential" SERVICE_LABEL="ser" SSA_DISK_FENCING="false" VG_AUTO_IMPORT="false" VOLUME_GROUP="datavg"
:rg_move[+199] PRINCIPAL_ACTION=RELEASE ASSOCIATE_ACTION=MOUNT AUXILLIARY_ACTION=NONE VG_RR_ACTION=RELEASE SIBLING_NODES= SIBLING_NONOWNER_NODES= SIBLING_ACQUIRING_NODES= SIBLING_RELEASING_NODES= FOLLOWER_ACTION=NONE NFS_HOST=ha_db02 DISK= CONCURRENT_VOLUME_GROUP= EXPORT_FILESYSTEM= EXPORT_FILESYSTEM_V4= STABLE_STORAGE_PATH= STABLE_STORAGE_COOKIE= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= COMMUNICATION_LINKS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= INACTIVE_TAKEOVER= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= PPRC_REP_RESOURCE= ERCMF_REP_RESOURCE= SVCPPRC_REP_RESOURCE= SR_REP_RESOURCE= TC_REP_RESOURCE= GMD_REP_RESOURCE= GMVG_REP_RESOURCE= OEM_VOLUME_GROUP= OEM_FILESYSTEM= APPLICATIONS=ha_app FILESYSTEM=ALL FORCED_VARYON=false FSCHECK_TOOL=fsck FS_BEFORE_IPADDR=false RECOVERY_METHOD=sequential SERVICE_LABEL=ser SSA_DISK_FENCING=false VG_AUTO_IMPORT=false VOLUME_GROUP=datavg
:rg_move[+200] set +a
:rg_move[+201] export GROUPNAME=ha_oracle_rg
+ha_oracle_rg:rg_move[+201] [[ ACQUIRE = ]]
+ha_oracle_rg:rg_move[+225] [ ACQUIRE = RELEASE ]
+ha_oracle_rg:rg_move[+231] [ ACQUIRE = ACQUIRE ]
+ha_oracle_rg:rg_move[+233] [ RELEASE = ACQUIRE ]
+ha_oracle_rg:rg_move[+233] [ NONE = ACQUIRE_SECONDARY ]
+ha_oracle_rg:rg_move[+241] [ 0 -ne 0 ]
+ha_oracle_rg:rg_move[+247] UPDATESTATD=1
+ha_oracle_rg:rg_move[+254] process_resources
:process_resources[2423] [[ high == high ]]
:process_resources[2423] version=1.125
:process_resources[2425] STATUS=0
:process_resources[2426] sddsrv_off=FALSE
:process_resources[2428] true
:process_resources[2430] : call rgpa, and it will tell us what to do next
:process_resources[2432] set -a
:process_resources[2433] clRGPA
:clRGPA[+49] [[ high = high ]]
:clRGPA[+49] version=1.16
:clRGPA[+51] usingVer=clrgpa
:clRGPA[+56] clrgpa
2013-03-24T19:22:27.370768 clrgpa
:clRGPA[+57] exit 0
:process_resources[2433] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[2434] RC=0
:process_resources[2435] set +a
:process_resources[2437] (( 0 != 0 ))
:process_resources[2443] RESOURCE_GROUPS=ha_oracle_rg
+ha_oracle_rg:process_resources[2444] GROUPNAME=ha_oracle_rg
+ha_oracle_rg:process_resources[2444] export GROUPNAME
+ha_oracle_rg:process_resources[2748] break
+ha_oracle_rg:process_resources[2759] : If sddsrv was turned off above, turn it back on again
+ha_oracle_rg:process_resources[2761] [[ FALSE == TRUE ]]
+ha_oracle_rg:process_resources[2767] exit 0
+ha_oracle_rg:rg_move[+292] [ -f /tmp/.NFSSTOPPED ]
+ha_oracle_rg:rg_move[+312] [ -f /tmp/.RPCLOCKDSTOPPED ]
+ha_oracle_rg:rg_move[+337] exit 0
Mar 24 19:22:27 EVENT COMPLETED: rg_move ha_db01 1 ACQUIRE 0
:rg_move_acquire[+68] exit_status=0
:rg_move_acquire[+69] : exit status of clcallev rg_move ha_db01 1 ACQUIRE is: 0
:rg_move_acquire[+70] exit 0
Mar 24 19:22:27 EVENT COMPLETED: rg_move_acquire ha_db01 1 0
Mar 24 19:22:36 EVENT START: rg_move_complete ha_db01 1
:get_local_nodename[+42] [[ high = high ]]
:get_local_nodename[+42] version=1.2.1.19
:get_local_nodename[+43] :get_local_nodename[+43] cl_get_path
HA_DIR=es
:get_local_nodename[+44] :get_local_nodename[+44] cl_get_path -S
OP_SEP=~
:get_local_nodename[+46] AIXODMDIR=/etc/objrepos
:get_local_nodename[+47] HAODMDIR=/etc/es/objrepos
:get_local_nodename[+49] :get_local_nodename[+49] uname -m
UNAME=0057D5EC4C00
:get_local_nodename[+55] export PLATFORM=__AIX__
:get_local_nodename[+61] export ODMDIR=/etc/es/objrepos
:get_local_nodename[+63] :get_local_nodename[+63] /usr/es/sbin/cluster/utilities/cllsclstr -N
nodename=ha_db01
:get_local_nodename[+65] :get_local_nodename[+65] cllsnode -cS
:get_local_nodename[+65] cut -d: -f1
NODENAME=ha_db01
ha_db02
:get_local_nodename[+69] [[ ha_db01 = ha_db01 ]]
:get_local_nodename[+72] print ha_db01
:get_local_nodename[+73] exit 0
:rg_move_complete[+94] [[ high = high ]]
:rg_move_complete[+94] version=1.37
:rg_move_complete[+95] :rg_move_complete[+95] cl_get_path
HA_DIR=es
:rg_move_complete[+104] STATUS=0
:rg_move_complete[+106] [ ! -n ]
:rg_move_complete[+108] EMULATE=REAL
:rg_move_complete[+111] set -u
:rg_move_complete[+113] [ 2 -lt 2 -o 2 -gt 3 ]
:rg_move_complete[+119] export NODENAME=ha_db01
:rg_move_complete[+120] RGID=1
:rg_move_complete[+121] [ 2 -eq 3 ]
:rg_move_complete[+125] RGDESTINATION=
:rg_move_complete[+130] odmget -qid=1 HACMPgroup
:rg_move_complete[+130] egrep group =
:rg_move_complete[+130] awk {print $3}
:rg_move_complete[+130] eval RGNAME="ha_oracle_rg"
:rg_move_complete[+130] RGNAME=ha_oracle_rg
:rg_move_complete[+131] GROUPNAME=ha_oracle_rg
+ha_oracle_rg:rg_move_complete[+133] UPDATESTATD=0
+ha_oracle_rg:rg_move_complete[+134] NFSSTOPPED=0
+ha_oracle_rg:rg_move_complete[+138] odmget HACMPnode
+ha_oracle_rg:rg_move_complete[+138] grep name =
+ha_oracle_rg:rg_move_complete[+138] sort
+ha_oracle_rg:rg_move_complete[+138] uniq
+ha_oracle_rg:rg_move_complete[+138] wc -l
+ha_oracle_rg:rg_move_complete[+138] [ 2 -eq 2 ]
+ha_oracle_rg:rg_move_complete[+141] +ha_oracle_rg:rg_move_complete[+141] odmget HACMPgroup
+ha_oracle_rg:rg_move_complete[+141] grep group =
+ha_oracle_rg:rg_move_complete[+141] awk {print $3}
+ha_oracle_rg:rg_move_complete[+141] sed s/"//g
RESOURCE_GROUPS=ha_oracle_rg
+ha_oracle_rg:rg_move_complete[+145] +ha_oracle_rg:rg_move_complete[+145] odmget -q group=ha_oracle_rg AND name=EXPORT_FILESYSTEM HACMPresource
+ha_oracle_rg:rg_move_complete[+145] grep value =
+ha_oracle_rg:rg_move_complete[+145] awk {print $3}
+ha_oracle_rg:rg_move_complete[+145] sed s/"//g
EXPORTLIST=
+ha_oracle_rg:rg_move_complete[+145] [[ -n ]]
+ha_oracle_rg:rg_move_complete[+170] set -a
+ha_oracle_rg:rg_move_complete[+171] +ha_oracle_rg:rg_move_complete[+171] clsetenvgrp ha_db01 rg_move_complete ha_oracle_rg
:clsetenvgrp[+50] [[ high = high ]]
:clsetenvgrp[+50] version=1.16
:clsetenvgrp[+52] usingVer=clSetenvgrp
:clsetenvgrp[+57] clSetenvgrp ha_db01 rg_move_complete ha_oracle_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+58] exit 0
clsetenvgrp_output=FORCEDOWN_GROUPS=""
RESOURCE_GROUPS=""
HOMELESS_GROUPS=""
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS=""
ASSOCIATE_ACTIONS=""
AUXILLIARY_ACTIONS=""
+ha_oracle_rg:rg_move_complete[+172] RC=0
+ha_oracle_rg:rg_move_complete[+173] eval FORCEDOWN_GROUPS=""
RESOURCE_GROUPS=""
HOMELESS_GROUPS=""
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS=""
ASSOCIATE_ACTIONS=""
AUXILLIARY_ACTIONS=""
+ha_oracle_rg:rg_move_complete[+173] FORCEDOWN_GROUPS=
+ha_oracle_rg:rg_move_complete[+173] RESOURCE_GROUPS=
+ha_oracle_rg:rg_move_complete[+173] HOMELESS_GROUPS=
+ha_oracle_rg:rg_move_complete[+173] HOMELESS_FOLLOWER_GROUPS=
+ha_oracle_rg:rg_move_complete[+173] ERRSTATE_GROUPS=
+ha_oracle_rg:rg_move_complete[+173] PRINCIPAL_ACTIONS=
+ha_oracle_rg:rg_move_complete[+173] ASSOCIATE_ACTIONS=
+ha_oracle_rg:rg_move_complete[+173] AUXILLIARY_ACTIONS=
+ha_oracle_rg:rg_move_complete[+174] set +a
+ha_oracle_rg:rg_move_complete[+175] [ 0 -ne 0 ]
+ha_oracle_rg:rg_move_complete[+249] [ 0 = 1 ]
+ha_oracle_rg:rg_move_complete[+286] [ 0 = 1 ]
+ha_oracle_rg:rg_move_complete[+362] process_resources
:process_resources[2423] [[ high == high ]]
:process_resources[2423] version=1.125
:process_resources[2425] STATUS=0
:process_resources[2426] sddsrv_off=FALSE
:process_resources[2428] true
:process_resources[2430] : call rgpa, and it will tell us what to do next
:process_resources[2432] set -a
:process_resources[2433] clRGPA
:clRGPA[+49] [[ high = high ]]
:clRGPA[+49] version=1.16
:clRGPA[+51] usingVer=clrgpa
:clRGPA[+56] clrgpa
2013-03-24T19:22:36.311456 clrgpa
:clRGPA[+57] exit 0
:process_resources[2433] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[2434] RC=0
:process_resources[2435] set +a
:process_resources[2437] (( 0 != 0 ))
:process_resources[2443] RESOURCE_GROUPS=''
:process_resources[2444] GROUPNAME=''
:process_resources[2444] export GROUPNAME
:process_resources[2748] break
:process_resources[2759] : If sddsrv was turned off above, turn it back on again
:process_resources[2761] [[ FALSE == TRUE ]]
:process_resources[2767] exit 0
+ha_oracle_rg:rg_move_complete[+363] STATUS=0
+ha_oracle_rg:rg_move_complete[+364] : exit status of process_resources is: 0
+ha_oracle_rg:rg_move_complete[+368] [[ FALSE = TRUE ]]
+ha_oracle_rg:rg_move_complete[+392] exit 0
Mar 24 19:22:36 EVENT COMPLETED: rg_move_complete ha_db01 1 0
HACMP Event Summary
Event: TE_RG_MOVE
Start time: Sun Mar 24 19:21:58 2013
End time: Sun Mar 24 19:22:36 2013
Action: Resource: Script Name:
----------------------------------------------------------------------------
Releasing resource group: ha_oracle_rg node_down_local
Search on: Sun.Mar.24.19:21:58.BEIST.2013.node_down_local.ha_oracle_rg.ref
Releasing resource: All_servers stop_server
Search on: Sun.Mar.24.19:21:58.BEIST.2013.stop_server.All_servers.ha_oracle_rg.ref
Resource offline: All_nonerror_servers stop_server
Search on: Sun.Mar.24.19:22:20.BEIST.2013.stop_server.All_nonerror_servers.ha_oracle_rg.ref
Releasing resource: All_filesystems cl_deactivate_fs
Search on: Sun.Mar.24.19:22:20.BEIST.2013.cl_deactivate_fs.All_filesystems.ha_oracle_rg.ref
Resource offline: All_non_error_filesystems cl_deactivate_fs
Search on: Sun.Mar.24.19:22:21.BEIST.2013.cl_deactivate_fs.All_non_error_filesystems.ha_oracle_rg.ref
Releasing resource: All_volume_groups cl_deactivate_vgs
Search on: Sun.Mar.24.19:22:21.BEIST.2013.cl_deactivate_vgs.All_volume_groups.ha_oracle_rg.ref
Resource offline: All_nonerror_volume_groups cl_deactivate_vgs
Search on: Sun.Mar.24.19:22:22.BEIST.2013.cl_deactivate_vgs.All_nonerror_volume_groups.ha_oracle_rg.ref
Releasing resource: All_service_addrs release_service_addr
Search on: Sun.Mar.24.19:22:22.BEIST.2013.release_service_addr.All_service_addrs.ha_oracle_rg.ref
Resource offline: All_nonerror_service_addrs release_service_addr
Search on: Sun.Mar.24.19:22:24.BEIST.2013.release_service_addr.All_nonerror_service_addrs.ha_oracle_rg.ref
Resource group offline: ha_oracle_rg node_down_local
Search on: Sun.Mar.24.19:22:24.BEIST.2013.node_down_local.ha_oracle_rg.ref
----------------------------------------------------------------------------
Mar 24 19:22:37 EVENT START: network_down ha_db02 net_diskhb_01
:network_down[+62] [[ high = high ]]
:network_down[+62] version=1.23
:network_down[+63] :network_down[+63] cl_get_path
HA_DIR=es
:network_down[+65] [ 2 -ne 2 ]
:network_down[+77] :network_down[+77] cl_rrmethods2call net_cleanup
:cl_rrmethods2call[+49] [[ high = high ]]
:cl_rrmethods2call[+49] version=1.17
:cl_rrmethods2call[+50] :cl_rrmethods2call[+50] cl_get_path
HA_DIR=es
:cl_rrmethods2call[+76] RRMETHODS=
:cl_rrmethods2call[+77] NEED_RR_ENV_VARS=no
:cl_rrmethods2call[+83] :cl_rrmethods2call[+83] odmget -qname=net_diskhb_01 HACMPnetwork
:cl_rrmethods2call[+83] egrep nimname
:cl_rrmethods2call[+83] awk {print $3}
:cl_rrmethods2call[+83] sed s/"//g
RRNET=diskhb
:cl_rrmethods2call[+83] [[ diskhb = XD_data ]]
:cl_rrmethods2call[+89] exit 0
METHODS=
:network_down[+91] set -u
:network_down[+104] exit 0
Mar 24 19:22:37 EVENT COMPLETED: network_down ha_db02 net_diskhb_01 0
Mar 24 19:22:37 EVENT START: network_down_complete ha_db02 net_diskhb_01
:network_down_complete[+61] [[ high = high ]]
:network_down_complete[+61] version=1.1.1.13
:network_down_complete[+62] :network_down_complete[+62] cl_get_path
HA_DIR=es
:network_down_complete[+64] [ ! -n ]
:network_down_complete[+66] EMULATE=REAL
:network_down_complete[+69] [ 2 -ne 2 ]
:network_down_complete[+75] set -u
:network_down_complete[+81] STATUS=0
:network_down_complete[+85] odmget HACMPnode
:network_down_complete[+85] grep name =
:network_down_complete[+85] sort
:network_down_complete[+85] uniq
:network_down_complete[+85] wc -l
:network_down_complete[+85] [ 2 -eq 2 ]
:network_down_complete[+87] :network_down_complete[+87] odmget HACMPgroup
:network_down_complete[+87] grep group =
:network_down_complete[+87] awk {print $3}
:network_down_complete[+87] sed s/"//g
RESOURCE_GROUPS=ha_oracle_rg
:network_down_complete[+91] :network_down_complete[+91] odmget -q group=ha_oracle_rg AND name=EXPORT_FILESYSTEM HACMPresource
:network_down_complete[+91] grep value
:network_down_complete[+91] awk {print $3}
:network_down_complete[+91] sed s/"//g
EXPORTLIST=
:network_down_complete[+92] [ -n ]
:network_down_complete[+114] cl_hb_alias_network net_diskhb_01 add
:cl_hb_alias_network[+57] [[ high = high ]]
:cl_hb_alias_network[+57] version=1.4
:cl_hb_alias_network[+58] :cl_hb_alias_network[+58] cl_get_path
HA_DIR=es
:cl_hb_alias_network[+59] :cl_hb_alias_network[+59] cl_get_path -S
OP_SEP=~
:cl_hb_alias_network[+61] NETWORK=net_diskhb_01
:cl_hb_alias_network[+62] ACTION=add
:cl_hb_alias_network[+65] [[ 2 != 2 ]]
:cl_hb_alias_network[+71] [[ add != add ]]
:cl_hb_alias_network[+77] set -u
:cl_hb_alias_network[+79] cl_echo 33 Starting execution of /usr/es/sbin/cluster/utilities/cl_hb_alias_network with parameters net_diskhb_01 add\n /usr/es/sbin/cluster/utilities/cl_hb_alias_network net_diskhb_01 add
:cl_echo[+35] version=1.16
:cl_echo[+84] HACMP_OUT_FILE=/var/hacmp/log/hacmp.out
:cl_echo[+89] set +u
:cl_echo[+89] [[ -n ]]
:cl_echo[+92] set -u
:cl_echo[+95] print -n -u2 Mar 24 2013 19:22:37
Mar 24 2013 19:22:37 :cl_echo[+96] MSG_ID=33
:cl_echo[+97] shift
:cl_echo[+98] dspmsg scripts.cat 33 Starting execution of /usr/es/sbin/cluster/utilities/cl_hb_alias_network with parameters net_diskhb_01 add\n /usr/es/sbin/cluster/utilities/cl_hb_alias_network net_diskhb_01 add
:cl_echo[+98] 1>& 2
Starting execution of /usr/es/sbin/cluster/utilities/cl_hb_alias_network with parameters net_diskhb_01 add
:cl_echo[+101] clsynclog /var/hacmp/log/hacmp.out
:cl_echo[+101] 1> /dev/null 2>& 1
:cl_echo[+105] return 0
:cl_hb_alias_network[+80] date
Sun Mar 24 19:22:37 BEIST 2013
:cl_hb_alias_network[+82] :cl_hb_alias_network[+82] get_local_nodename
:get_local_nodename[+42] [[ high = high ]]
:get_local_nodename[+42] version=1.2.1.19
:get_local_nodename[+43] :get_local_nodename[+43] cl_get_path
HA_DIR=es
:get_local_nodename[+44] :get_local_nodename[+44] cl_get_path -S
OP_SEP=~
:get_local_nodename[+46] AIXODMDIR=/etc/objrepos
:get_local_nodename[+47] HAODMDIR=/etc/es/objrepos
:get_local_nodename[+49] :get_local_nodename[+49] uname -m
UNAME=0057D5EC4C00
:get_local_nodename[+55] export PLATFORM=__AIX__
:get_local_nodename[+61] export ODMDIR=/etc/es/objrepos
:get_local_nodename[+63] :get_local_nodename[+63] /usr/es/sbin/cluster/utilities/cllsclstr -N
nodename=ha_db01
:get_local_nodename[+65] :get_local_nodename[+65] cllsnode -cS
:get_local_nodename[+65] cut -d: -f1
NODENAME=ha_db01
ha_db02
:get_local_nodename[+69] [[ ha_db01 = ha_db01 ]]
:get_local_nodename[+72] print ha_db01
:get_local_nodename[+73] exit 0
LOCALNODENAME=ha_db01
:cl_hb_alias_network[+83] STATUS=0
:cl_hb_alias_network[+86] cllsnw -J ~ -Sn net_diskhb_01
:cl_hb_alias_network[+86] cut -d~ -f4
:cl_hb_alias_network[+86] grep -q hb_over_alias
:cl_hb_alias_network[+86] exit 0
:network_down_complete[+120] exit 0
Mar 24 19:22:37 EVENT COMPLETED: network_down_complete ha_db02 net_diskhb_01 0
HACMP Event Summary
Event: TE_FAIL_NETWORK
Start time: Sun Mar 24 19:22:37 2013
End time: Sun Mar 24 19:22:37 2013
Action: Resource: Script Name:
----------------------------------------------------------------------------
No resources changed as a result of this event
----------------------------------------------------------------------------
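The event summary above shows the key symptom: the `network_down` on `net_diskhb_01` fires right after the RG release on ha_db01 and just before ha_db02 tries to acquire. When reading a long hacmp.out like this, it helps to strip it down to just the EVENT lines so that ordering is obvious. A minimal sketch (the heredoc stands in for a real copy of `/var/hacmp/log/hacmp.out`; point `LOG` at your own file):

```shell
# Minimal sketch: reduce hacmp.out to its EVENT START/COMPLETED lines so the
# sequence "rg_move RELEASE on db01 -> network_down net_diskhb_01 -> rg_move
# on db02" is easy to see at a glance.
# The heredoc below is a stand-in sample so the pipeline can be tried anywhere;
# in practice set LOG to a copy of /var/hacmp/log/hacmp.out.
LOG=./hacmp.out.sample
cat > "$LOG" <<'EOF'
Mar 24 19:22:24 EVENT COMPLETED: rg_move ha_db01 1 RELEASE 0
Mar 24 19:22:37 EVENT START: network_down ha_db02 net_diskhb_01
Mar 24 19:22:40 EVENT COMPLETED: rg_move ha_db02 1 RELEASE 0
EOF
# -n prefixes each hit with its line offset into the log, which makes it easy
# to jump back to the full trace around each event.
grep -nE 'EVENT (START|COMPLETED):' "$LOG"
```

On the real log this immediately shows whether the diskhb `network_down` always lands between the RELEASE on the source node and the ACQUIRE on the target, which is the pattern to confirm before suspecting the shared disk path (here an iSCSI LUN) being briefly closed during varyoff.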
Mar 24 19:22:39 EVENT START: rg_move_release ha_db02 1
:rg_move_release[+54] [[ high = high ]]
:rg_move_release[+54] version=1.6
:rg_move_release[+56] set -u
:rg_move_release[+58] [ 2 != 2 ]
:rg_move_release[+64] set +u
:rg_move_release[+66] clcallev rg_move ha_db02 1 RELEASE
Mar 24 19:22:39 EVENT START: rg_move ha_db02 1 RELEASE
:get_local_nodename[+42] [[ high = high ]]
:get_local_nodename[+42] version=1.2.1.19
:get_local_nodename[+43] :get_local_nodename[+43] cl_get_path
HA_DIR=es
:get_local_nodename[+44] :get_local_nodename[+44] cl_get_path -S
OP_SEP=~
:get_local_nodename[+46] AIXODMDIR=/etc/objrepos
:get_local_nodename[+47] HAODMDIR=/etc/es/objrepos
:get_local_nodename[+49] :get_local_nodename[+49] uname -m
UNAME=0057D5EC4C00
:get_local_nodename[+55] export PLATFORM=__AIX__
:get_local_nodename[+61] export ODMDIR=/etc/es/objrepos
:get_local_nodename[+63] :get_local_nodename[+63] /usr/es/sbin/cluster/utilities/cllsclstr -N
nodename=ha_db01
:get_local_nodename[+65] :get_local_nodename[+65] cllsnode -cS
:get_local_nodename[+65] cut -d: -f1
NODENAME=ha_db01
ha_db02
:get_local_nodename[+69] [[ ha_db01 = ha_db01 ]]
:get_local_nodename[+72] print ha_db01
:get_local_nodename[+73] exit 0
:rg_move[+71] version=1.49
:rg_move[+81] STATUS=0
:rg_move[+83] [ ! -n ]
:rg_move[+85] EMULATE=REAL
:rg_move[+91] set -u
:rg_move[+93] export NODENAME=ha_db02
:rg_move[+94] RGID=1
:rg_move[+95] [[ 3 = 3 ]]
:rg_move[+97] ACTION=RELEASE
:rg_move[+104] odmget -qid=1 HACMPgroup
:rg_move[+104] egrep group =
:rg_move[+104] awk {print $3}
:rg_move[+104] eval RGNAME="ha_oracle_rg"
:rg_move[+104] RGNAME=ha_oracle_rg
:rg_move[+106] UPDATESTATD=0
:rg_move[+107] export UPDATESTATD
:rg_move[+111] export RG_MOVE_EVENT=true
:rg_move[+116] group_state=$RESGRP_ha_oracle_rg_ha_db01
:rg_move[+117] set +u
:rg_move[+118] eval print $RESGRP_ha_oracle_rg_ha_db01
:rg_move[+118] print
:rg_move[+118] export RG_MOVE_ONLINE=
:rg_move[+119] set -u
:rg_move[+120] RG_MOVE_ONLINE=TMP_ERROR
:rg_move[+127] rm -f /tmp/.NFSSTOPPED
:rg_move[+128] rm -f /tmp/.RPCLOCKDSTOPPED
:rg_move[+135] set -a
:rg_move[+136] :rg_move[+136] clsetenvgrp ha_db02 rg_move ha_oracle_rg
:clsetenvgrp[+50] [[ high = high ]]
:clsetenvgrp[+50] version=1.16
:clsetenvgrp[+52] usingVer=clSetenvgrp
:clsetenvgrp[+57] clSetenvgrp ha_db02 rg_move ha_oracle_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+58] exit 0
clsetenvgrp_output=FORCEDOWN_GROUPS=""
RESOURCE_GROUPS=""
HOMELESS_GROUPS="ha_oracle_rg"
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS=""
ASSOCIATE_ACTIONS=""
AUXILLIARY_ACTIONS=""
:rg_move[+137] RC=0
:rg_move[+138] eval FORCEDOWN_GROUPS=""
RESOURCE_GROUPS=""
HOMELESS_GROUPS="ha_oracle_rg"
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS=""
ASSOCIATE_ACTIONS=""
AUXILLIARY_ACTIONS=""
:rg_move[+138] FORCEDOWN_GROUPS=
:rg_move[+138] RESOURCE_GROUPS=
:rg_move[+138] HOMELESS_GROUPS=ha_oracle_rg
:rg_move[+138] HOMELESS_FOLLOWER_GROUPS=
:rg_move[+138] ERRSTATE_GROUPS=
:rg_move[+138] PRINCIPAL_ACTIONS=
:rg_move[+138] ASSOCIATE_ACTIONS=
:rg_move[+138] AUXILLIARY_ACTIONS=
:rg_move[+139] set +a
:rg_move[+143] [[ 0 -ne 0 ]]
:rg_move[+143] [[ -z ha_oracle_rg ]]
:rg_move[+152] [[ -z FALSE ]]
:rg_move[+254] process_resources
:process_resources[2423] [[ high == high ]]
:process_resources[2423] version=1.125
:process_resources[2425] STATUS=0
:process_resources[2426] sddsrv_off=FALSE
:process_resources[2428] true
:process_resources[2430] : call rgpa, and it will tell us what to do next
:process_resources[2432] set -a
:process_resources[2433] clRGPA
:clRGPA[+49] [[ high = high ]]
:clRGPA[+49] version=1.16
:clRGPA[+51] usingVer=clrgpa
:clRGPA[+56] clrgpa
2013-03-24T19:22:40.085577 clrgpa
:clRGPA[+57] exit 0
:process_resources[2433] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[2434] RC=0
:process_resources[2435] set +a
:process_resources[2437] (( 0 != 0 ))
:process_resources[2443] RESOURCE_GROUPS=''
:process_resources[2444] GROUPNAME=''
:process_resources[2444] export GROUPNAME
:process_resources[2748] break
:process_resources[2759] : If sddsrv was turned off above, turn it back on again
:process_resources[2761] [[ FALSE == TRUE ]]
:process_resources[2767] exit 0
:rg_move[+292] [ -f /tmp/.NFSSTOPPED ]
:rg_move[+312] [ -f /tmp/.RPCLOCKDSTOPPED ]
:rg_move[+337] exit 0
Mar 24 19:22:40 EVENT COMPLETED: rg_move ha_db02 1 RELEASE 0
:rg_move_release[+68] exit 0
Mar 24 19:22:40 EVENT COMPLETED: rg_move_release ha_db02 1 0
Mar 24 19:23:09 EVENT START: rg_move_fence ha_db02 1
:rg_move_fence[+57] [[ high = high ]]
:rg_move_fence[+57] version=1.11
:rg_move_fence[+58] export NODENAME=ha_db02
:rg_move_fence[+60] set -u
:rg_move_fence[+62] [ 2 != 2 ]
:rg_move_fence[+68] set +u
:rg_move_fence[+70] [[ -z FALSE ]]
:rg_move_fence[+75] [[ FALSE = TRUE ]]
:rg_move_fence[+98] process_resources FENCE
:process_resources[2423] [[ high == high ]]
:process_resources[2423] version=1.125
:process_resources[2425] STATUS=0
:process_resources[2426] sddsrv_off=FALSE
:process_resources[2428] true
:process_resources[2430] : call rgpa, and it will tell us what to do next
:process_resources[2432] set -a
:process_resources[2433] clRGPA FENCE
:clRGPA[+49] [[ high = high ]]
:clRGPA[+49] version=1.16
:clRGPA[+51] usingVer=clrgpa
:clRGPA[+56] clrgpa FENCE
2013-03-24T19:23:09.934619 clrgpa
:clRGPA[+57] exit 0
:process_resources[2433] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[2434] RC=0
:process_resources[2435] set +a
:process_resources[2437] (( 0 != 0 ))
:process_resources[2443] RESOURCE_GROUPS=''
:process_resources[2444] GROUPNAME=''
:process_resources[2444] export GROUPNAME
:process_resources[2748] break
:process_resources[2759] : If sddsrv was turned off above, turn it back on again
:process_resources[2761] [[ FALSE == TRUE ]]
:process_resources[2767] exit 0
:rg_move_fence[+99] : exit status of process_resources FENCE is: 0
:rg_move_fence[+102] [[ FALSE = TRUE ]]
:rg_move_fence[+136] exit 0
Mar 24 19:23:09 EVENT COMPLETED: rg_move_fence ha_db02 1 0
Mar 24 19:23:10 EVENT START: rg_move_acquire ha_db02 1
:rg_move_acquire[+54] [[ high = high ]]
:rg_move_acquire[+54] version=1.9
:rg_move_acquire[+56] set -u
:rg_move_acquire[+58] [ 2 != 2 ]
:rg_move_acquire[+64] set +u
:rg_move_acquire[+67] clcallev rg_move ha_db02 1 ACQUIRE
Mar 24 19:23:10 EVENT START: rg_move ha_db02 1 ACQUIRE
:get_local_nodename[+42] [[ high = high ]]
:get_local_nodename[+42] version=1.2.1.19
:get_local_nodename[+43] :get_local_nodename[+43] cl_get_path
HA_DIR=es
:get_local_nodename[+44] :get_local_nodename[+44] cl_get_path -S
OP_SEP=~
:get_local_nodename[+46] AIXODMDIR=/etc/objrepos
:get_local_nodename[+47] HAODMDIR=/etc/es/objrepos
:get_local_nodename[+49] :get_local_nodename[+49] uname -m
UNAME=0057D5EC4C00
:get_local_nodename[+55] export PLATFORM=__AIX__
:get_local_nodename[+61] export ODMDIR=/etc/es/objrepos
:get_local_nodename[+63] :get_local_nodename[+63] /usr/es/sbin/cluster/utilities/cllsclstr -N
nodename=ha_db01
:get_local_nodename[+65] :get_local_nodename[+65] cllsnode -cS
:get_local_nodename[+65] cut -d: -f1
NODENAME=ha_db01
ha_db02
:get_local_nodename[+69] [[ ha_db01 = ha_db01 ]]
:get_local_nodename[+72] print ha_db01
:get_local_nodename[+73] exit 0
:rg_move[+71] version=1.49
:rg_move[+81] STATUS=0
:rg_move[+83] [ ! -n ]
:rg_move[+85] EMULATE=REAL
:rg_move[+91] set -u
:rg_move[+93] export NODENAME=ha_db02
:rg_move[+94] RGID=1
:rg_move[+95] [[ 3 = 3 ]]
:rg_move[+97] ACTION=ACQUIRE
:rg_move[+104] odmget -qid=1 HACMPgroup
:rg_move[+104] egrep group =
:rg_move[+104] awk {print $3}
:rg_move[+104] eval RGNAME="ha_oracle_rg"
:rg_move[+104] RGNAME=ha_oracle_rg
:rg_move[+106] UPDATESTATD=0
:rg_move[+107] export UPDATESTATD
:rg_move[+111] export RG_MOVE_EVENT=true
:rg_move[+116] group_state=$RESGRP_ha_oracle_rg_ha_db01
:rg_move[+117] set +u
:rg_move[+118] eval print $RESGRP_ha_oracle_rg_ha_db01
:rg_move[+118] print
:rg_move[+118] export RG_MOVE_ONLINE=
:rg_move[+119] set -u
:rg_move[+120] RG_MOVE_ONLINE=TMP_ERROR
:rg_move[+127] rm -f /tmp/.NFSSTOPPED
:rg_move[+128] rm -f /tmp/.RPCLOCKDSTOPPED
:rg_move[+135] set -a
:rg_move[+136] :rg_move[+136] clsetenvgrp ha_db02 rg_move ha_oracle_rg
:clsetenvgrp[+50] [[ high = high ]]
:clsetenvgrp[+50] version=1.16
:clsetenvgrp[+52] usingVer=clSetenvgrp
:clsetenvgrp[+57] clSetenvgrp ha_db02 rg_move ha_oracle_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+58] exit 0
clsetenvgrp_output=FORCEDOWN_GROUPS=""
RESOURCE_GROUPS=""
HOMELESS_GROUPS="ha_oracle_rg"
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS=""
ASSOCIATE_ACTIONS=""
AUXILLIARY_ACTIONS=""
:rg_move[+137] RC=0
:rg_move[+138] eval FORCEDOWN_GROUPS=""
RESOURCE_GROUPS=""
HOMELESS_GROUPS="ha_oracle_rg"
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS=""
ASSOCIATE_ACTIONS=""
AUXILLIARY_ACTIONS=""
:rg_move[+138] FORCEDOWN_GROUPS=
:rg_move[+138] RESOURCE_GROUPS=
:rg_move[+138] HOMELESS_GROUPS=ha_oracle_rg
:rg_move[+138] HOMELESS_FOLLOWER_GROUPS=
:rg_move[+138] ERRSTATE_GROUPS=
:rg_move[+138] PRINCIPAL_ACTIONS=
:rg_move[+138] ASSOCIATE_ACTIONS=
:rg_move[+138] AUXILLIARY_ACTIONS=
:rg_move[+139] set +a
:rg_move[+143] [[ 0 -ne 0 ]]
:rg_move[+143] [[ -z ha_oracle_rg ]]
:rg_move[+152] [[ -z FALSE ]]
:rg_move[+254] process_resources
:process_resources[2423] [[ high == high ]]
:process_resources[2423] version=1.125
:process_resources[2425] STATUS=0
:process_resources[2426] sddsrv_off=FALSE
:process_resources[2428] true
:process_resources[2430] : call rgpa, and it will tell us what to do next
:process_resources[2432] set -a
:process_resources[2433] clRGPA
:clRGPA[+49] [[ high = high ]]
:clRGPA[+49] version=1.16
:clRGPA[+51] usingVer=clrgpa
:clRGPA[+56] clrgpa
2013-03-24T19:23:10.348672 clrgpa
:clRGPA[+57] exit 0
:process_resources[2433] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[2434] RC=0
:process_resources[2435] set +a
:process_resources[2437] (( 0 != 0 ))
:process_resources[2443] RESOURCE_GROUPS=''
:process_resources[2444] GROUPNAME=''
:process_resources[2444] export GROUPNAME
:process_resources[2748] break
:process_resources[2759] : If sddsrv was turned off above, turn it back on again
:process_resources[2761] [[ FALSE == TRUE ]]
:process_resources[2767] exit 0
:rg_move[+292] [ -f /tmp/.NFSSTOPPED ]
:rg_move[+312] [ -f /tmp/.RPCLOCKDSTOPPED ]
:rg_move[+337] exit 0
Mar 24 19:23:10 EVENT COMPLETED: rg_move ha_db02 1 ACQUIRE 0
:rg_move_acquire[+68] exit_status=0
:rg_move_acquire[+69] : exit status of clcallev rg_move ha_db02 1 ACQUIRE is: 0
:rg_move_acquire[+70] exit 0
Mar 24 19:23:10 EVENT COMPLETED: rg_move_acquire ha_db02 1 0
Mar 24 19:23:10 EVENT START: rg_move_complete ha_db02 1
:get_local_nodename[+42] [[ high = high ]]
:get_local_nodename[+42] version=1.2.1.19
:get_local_nodename[+43] :get_local_nodename[+43] cl_get_path
HA_DIR=es
:get_local_nodename[+44] :get_local_nodename[+44] cl_get_path -S
OP_SEP=~
:get_local_nodename[+46] AIXODMDIR=/etc/objrepos
:get_local_nodename[+47] HAODMDIR=/etc/es/objrepos
:get_local_nodename[+49] :get_local_nodename[+49] uname -m
UNAME=0057D5EC4C00
:get_local_nodename[+55] export PLATFORM=__AIX__
:get_local_nodename[+61] export ODMDIR=/etc/es/objrepos
:get_local_nodename[+63] :get_local_nodename[+63] /usr/es/sbin/cluster/utilities/cllsclstr -N
nodename=ha_db01
:get_local_nodename[+65] :get_local_nodename[+65] cllsnode -cS
:get_local_nodename[+65] cut -d: -f1
NODENAME=ha_db01
ha_db02
:get_local_nodename[+69] [[ ha_db01 = ha_db01 ]]
:get_local_nodename[+72] print ha_db01
:get_local_nodename[+73] exit 0
:rg_move_complete[+94] [[ high = high ]]
:rg_move_complete[+94] version=1.37
:rg_move_complete[+95] :rg_move_complete[+95] cl_get_path
HA_DIR=es
:rg_move_complete[+104] STATUS=0
:rg_move_complete[+106] [ ! -n ]
:rg_move_complete[+108] EMULATE=REAL
:rg_move_complete[+111] set -u
:rg_move_complete[+113] [ 2 -lt 2 -o 2 -gt 3 ]
:rg_move_complete[+119] export NODENAME=ha_db02
:rg_move_complete[+120] RGID=1
:rg_move_complete[+121] [ 2 -eq 3 ]
:rg_move_complete[+125] RGDESTINATION=
:rg_move_complete[+130] odmget -qid=1 HACMPgroup
:rg_move_complete[+130] egrep group =
:rg_move_complete[+130] awk {print $3}
:rg_move_complete[+130] eval RGNAME="ha_oracle_rg"
:rg_move_complete[+130] RGNAME=ha_oracle_rg
:rg_move_complete[+131] GROUPNAME=ha_oracle_rg
+ha_oracle_rg:rg_move_complete[+133] UPDATESTATD=0
+ha_oracle_rg:rg_move_complete[+134] NFSSTOPPED=0
+ha_oracle_rg:rg_move_complete[+138] odmget HACMPnode
+ha_oracle_rg:rg_move_complete[+138] grep name =
+ha_oracle_rg:rg_move_complete[+138] sort
+ha_oracle_rg:rg_move_complete[+138] uniq
+ha_oracle_rg:rg_move_complete[+138] wc -l
+ha_oracle_rg:rg_move_complete[+138] [ 2 -eq 2 ]
+ha_oracle_rg:rg_move_complete[+141] +ha_oracle_rg:rg_move_complete[+141] odmget HACMPgroup
+ha_oracle_rg:rg_move_complete[+141] grep group =
+ha_oracle_rg:rg_move_complete[+141] awk {print $3}
+ha_oracle_rg:rg_move_complete[+141] sed s/"//g
RESOURCE_GROUPS=ha_oracle_rg
+ha_oracle_rg:rg_move_complete[+145] +ha_oracle_rg:rg_move_complete[+145] odmget -q group=ha_oracle_rg AND name=EXPORT_FILESYSTEM HACMPresource
+ha_oracle_rg:rg_move_complete[+145] grep value =
+ha_oracle_rg:rg_move_complete[+145] awk {print $3}
+ha_oracle_rg:rg_move_complete[+145] sed s/"//g
EXPORTLIST=
+ha_oracle_rg:rg_move_complete[+145] [[ -n ]]
+ha_oracle_rg:rg_move_complete[+170] set -a
+ha_oracle_rg:rg_move_complete[+171] +ha_oracle_rg:rg_move_complete[+171] clsetenvgrp ha_db02 rg_move_complete ha_oracle_rg
:clsetenvgrp[+50] [[ high = high ]]
:clsetenvgrp[+50] version=1.16
:clsetenvgrp[+52] usingVer=clSetenvgrp
:clsetenvgrp[+57] clSetenvgrp ha_db02 rg_move_complete ha_oracle_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+58] exit 0
clsetenvgrp_output=FORCEDOWN_GROUPS=""
RESOURCE_GROUPS=""
HOMELESS_GROUPS="ha_oracle_rg"
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS=""
ASSOCIATE_ACTIONS=""
AUXILLIARY_ACTIONS=""
+ha_oracle_rg:rg_move_complete[+172] RC=0
+ha_oracle_rg:rg_move_complete[+173] eval FORCEDOWN_GROUPS=""
RESOURCE_GROUPS=""
HOMELESS_GROUPS="ha_oracle_rg"
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS=""
ASSOCIATE_ACTIONS=""
AUXILLIARY_ACTIONS=""
+ha_oracle_rg:rg_move_complete[+173] FORCEDOWN_GROUPS=
+ha_oracle_rg:rg_move_complete[+173] RESOURCE_GROUPS=
+ha_oracle_rg:rg_move_complete[+173] HOMELESS_GROUPS=ha_oracle_rg
+ha_oracle_rg:rg_move_complete[+173] HOMELESS_FOLLOWER_GROUPS=
+ha_oracle_rg:rg_move_complete[+173] ERRSTATE_GROUPS=
+ha_oracle_rg:rg_move_complete[+173] PRINCIPAL_ACTIONS=
+ha_oracle_rg:rg_move_complete[+173] ASSOCIATE_ACTIONS=
+ha_oracle_rg:rg_move_complete[+173] AUXILLIARY_ACTIONS=
+ha_oracle_rg:rg_move_complete[+174] set +a
+ha_oracle_rg:rg_move_complete[+175] [ 0 -ne 0 ]
+ha_oracle_rg:rg_move_complete[+249] [ 0 = 1 ]
+ha_oracle_rg:rg_move_complete[+286] [ 0 = 1 ]
+ha_oracle_rg:rg_move_complete[+323] cl_RMupdate rg_error ha_oracle_rg rg_move_complete
2013-03-24T19:23:10.784350
2013-03-24T19:23:10.795986
Reference string: Sun.Mar.24.19:23:10.BEIST.2013.rg_move_complete.ha_oracle_rg.ref
+ha_oracle_rg:rg_move_complete[+324] [ 0 != 1 ]
+ha_oracle_rg:rg_move_complete[+326] STATUS=3
+ha_oracle_rg:rg_move_complete[+362] process_resources
:process_resources[2423] [[ high == high ]]
:process_resources[2423] version=1.125
:process_resources[2425] STATUS=0
:process_resources[2426] sddsrv_off=FALSE
:process_resources[2428] true
:process_resources[2430] : call rgpa, and it will tell us what to do next
:process_resources[2432] set -a
:process_resources[2433] clRGPA
:clRGPA[+49] [[ high = high ]]
:clRGPA[+49] version=1.16
:clRGPA[+51] usingVer=clrgpa
:clRGPA[+56] clrgpa
2013-03-24T19:23:10.851740 clrgpa
:clRGPA[+57] exit 0
:process_resources[2433] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[2434] RC=0
:process_resources[2435] set +a
:process_resources[2437] (( 0 != 0 ))
:process_resources[2443] RESOURCE_GROUPS=''
:process_resources[2444] GROUPNAME=''
:process_resources[2444] export GROUPNAME
:process_resources[2748] break
:process_resources[2759] : If sddsrv was turned off above, turn it back on again
:process_resources[2761] [[ FALSE == TRUE ]]
:process_resources[2767] exit 0
+ha_oracle_rg:rg_move_complete[+363] STATUS=0
+ha_oracle_rg:rg_move_complete[+364] : exit status of process_resources is: 0
+ha_oracle_rg:rg_move_complete[+368] [[ FALSE = TRUE ]]
+ha_oracle_rg:rg_move_complete[+392] exit 0
Mar 24 19:23:10 EVENT COMPLETED: rg_move_complete ha_db02 1 0
HACMP Event Summary
Event: TE_RG_MOVE
Start time: Sun Mar 24 19:22:39 2013
End time: Sun Mar 24 19:23:10 2013
Action: Resource: Script Name:
----------------------------------------------------------------------------
Error encountered with group: ha_oracle_rg rg_move_complete
Search on: Sun.Mar.24.19:23:10.BEIST.2013.rg_move_complete.ha_oracle_rg.ref
----------------------------------------------------------------------------
:check_for_site_down[+54] [[ high = high ]]
:check_for_site_down[+54] version=1.4
:check_for_site_down[+55] :check_for_site_down[+55] cl_get_path
HA_DIR=es
:check_for_site_down[+57] STATUS=0
:check_for_site_down[+59] set +u
:check_for_site_down[+61] [ ]
:check_for_site_down[+72] exit 0
Mar 24 19:23:13 EVENT START: node_down ha_db02
:node_down[62] [[ high == high ]]
:node_down[62] version=1.66
:node_down[63] cl_get_path
:node_down[63] HA_DIR=es
:node_down[65] NODENAME=ha_db02
:node_down[65] export NODENAME
:node_down[66] PARAM=''
:node_down[66] export PARAM
:node_down[68] UPDATESTATDFILE=/usr/es/sbin/cluster/etc/updatestatd
:node_down[71] : This will be the exit status seen by the Cluster Manager.
:node_down[72] : If STATUS is not 0, the Cluster Manager will enter reconfiguration
:node_down[73] : All lower level scripts should pass status back to the caller.
:node_down[74] : This will allow a Resource Groups to be processed individaully,
:node_down[75] : independent of the status of another resource group.
:node_down[77] STATUS=0
:node_down[77] typeset -i STATUS
:node_down[79] EMULATE=REAL
:node_down[81] set -u
:node_down[83] (( 1 < 1 ))
:node_down[88] rm -f /tmp/.RPCLOCKDSTOPPED
:node_down[89] rm -f /usr/es/sbin/cluster/etc/updatestatd
:node_down[92] : For RAS debugging enhancement, the result of ps -edf is captured at this time
:node_down[94] : begin ps -edf
:node_down[95] ps -edf
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 18:51:28 - 0:00 /etc/init
root 983186 1 0 18:52:37 - 0:00 /usr/bin/topasrec -L -s 300 -R 1 -r 6 -o /etc/perf/daily/ -ypersistent=1 -O type=bin -ystart_time=18:52:37,Mar24,2013
root 1114252 1 0 18:52:07 - 0:00 /usr/lib/errdemon
root 1376420 2883604 0 18:52:39 - 0:00 /bin/ksh /pconsole/lwi/bin/lwistart_src.sh
root 1441986 1 0 18:51:37 - 0:00 /usr/ccs/bin/shlap64
root 1638500 1 0 18:51:54 - 0:00 /usr/dt/bin/dtlogin -daemon
root 1703998 7077928 0 19:23:13 - 0:00 /usr/es/sbin/cluster/events/cmd/clcallev node_down ha_db02
root 1769640 1900618 0 18:52:37 - 0:02 dtgreet
root 1835082 1638500 0 18:51:55 - 0:01 /usr/lpp/X11/bin/X -cc 4 -D /usr/lib/X11//rgb -T -force :0 -auth /var/dt/A:0-eZagqa
root 1900618 1638500 0 18:51:55 - 0:00 dtlogin <:0> -daemon
root 1966164 1 0 18:52:07 - 0:01 /usr/sbin/syncd 60
root 2687170 2883604 0 18:52:23 - 0:00 /usr/sbin/portmap
daemon 2752520 2883604 0 18:52:32 - 0:00 /usr/sbin/rpc.statd -d 0 -t 50
root 2818058 2883604 0 18:52:23 - 0:00 /usr/sbin/inetd
root 2883604 1 0 18:52:17 - 0:00 /usr/sbin/srcmstr
root 2949218 2883604 0 18:52:23 - 0:00 /usr/sbin/snmpd
root 3014756 2883604 0 18:52:23 - 0:00 /usr/sbin/hostmibd
root 3080290 2883604 0 18:52:22 - 0:00 /usr/sbin/syslogd
root 3145832 2883604 0 18:52:23 - 0:00 /usr/sbin/snmpmibd
root 3211370 2883604 0 18:52:24 - 0:00 /usr/sbin/aixmibd
root 3407980 2883604 0 18:52:23 - 0:00 sendmail: accepting connections
root 3473624 2883604 0 18:52:30 - 0:00 /usr/sbin/aso
root 3539072 1 0 18:52:33 - 0:00 /usr/sbin/uprintfd
root 3604724 2883604 0 18:52:31 - 0:00 /usr/sbin/biod 6
root 3801286 2883604 0 18:52:32 - 0:00 /usr/sbin/rpc.lockd -d 0
root 3866770 1 0 18:52:33 - 0:00 /usr/sbin/cron
root 3932290 2883604 0 18:52:33 - 0:00 /usr/sbin/qdaemon
root 3997834 2818058 0 18:52:33 - 0:00 telnetd -a
root 4063406 2883604 0 18:52:55 - 0:00 /usr/sbin/rsct/bin/IBM.ServiceRMd
root 4128938 2883604 0 18:52:33 - 0:00 /usr/sbin/writesrv
root 4194478 2883604 0 18:53:00 - 0:00 /usr/sbin/rsct/bin/IBM.AuditRMd
root 4259984 2883604 0 18:52:36 - 0:00 /usr/sbin/sshd
root 4325512 3997834 0 18:52:34 pts/0 0:00 -ksh
root 4456590 1 0 18:52:36 lft0 0:00 /usr/sbin/getty /dev/console
root 4522150 2818058 0 19:14:22 - 0:00 telnetd -a
root 4587668 1 0 18:52:36 - 0:00 /usr/lpp/diagnostics/bin/diagd
root 4653202 2883604 0 18:52:36 - 0:05 /usr/sbin/rsct/bin/rmcd -a IBM.LPCommands -r
root 4784170 9830402 0 19:21:55 pts/1 0:00 /bin/ksh93 /usr/es/sbin/cluster/utilities/clRGmove -s false -m -i -g ha_oracle_rg -n ha_db02
pconsole 4849846 1376420 0 18:52:41 - 0:00 /bin/ksh /pconsole/lwi/bin/lwistart_src.sh
pconsole 4915400 4849846 0 18:52:42 - 0:23 /usr/java5/bin/java -Xmx512m -Xms20m -Xscmx10m -Xshareclasses -Dfile.encoding=UTF-8 -Xbootclasspath/a:/pconsole/lwi/runtime/core/eclipse/plugins/com.ibm.rcp.base_6.2.1.20091117-1800/rcpbootcp.jar:/pconsole/lwi/lib/ISCJaasModule.jar:/pconsole/lwi/lib/com.ibm.logging.icl_1.1.1.jar:/pconsole/lwi/lib/jaas2zos.jar:/pconsole/lwi/lib/jaasmodule.jar:/pconsole/lwi/lib/lwinative.jar:/pconsole/lwi/lib/lwinl.jar:/pconsole/lwi/lib/lwirolemap.jar:/pconsole/lwi/lib/lwisecurity.jar:/pconsole/lwi/lib/lwitools.jar:/pconsole/lwi/lib/passutils.jar -Xverify:none -cp eclipse/launch.jar:eclipse/startup.jar:/pconsole/lwi/runtime/core/eclipse/plugins/com.ibm.rcp.base_6.2.1.20091117-1800/launcher.jar com.ibm.lwi.LaunchLWI
root 4980740 2883604 0 18:52:55 - 0:00 /usr/sbin/rsct/bin/IBM.DRMd
root 5243084 2883604 0 18:59:50 - 0:00 /usr/sbin/rsct/bin/IBM.HostRMd
root 5374144 2883604 0 18:52:43 - 0:02 /usr/es/sbin/cluster/clcomd -d
root 5439648 7536736 0 19:19:55 - 0:00 /usr/sbin/rsct/bin/hats_diskhb_nim
root 5505218 2883604 0 18:52:55 - 0:00 /usr/sbin/rsct/bin/vac8/IBM.CSMAgentRMd
root 5636296 2883604 0 18:52:55 - 0:00 /usr/sbin/rsct/bin/IBM.MgmtDomainRMd
root 5701828 7536736 0 19:19:55 - 0:00 /usr/sbin/rsct/bin/hats_nim
root 5832934 2883604 17 19:16:26 - 0:04 /usr/es/sbin/cluster/clstrmgr
root 6160420 2883604 0 19:19:56 - 0:00 /usr/sbin/gsclvmd
root 6422676 2883604 0 19:19:54 - 0:00 haemd HACMP 1 ha_oracle SECNOSUPPORT
root 7077928 5832934 1 19:20:19 - 0:00 run_rcovcmd
root 7209142 2883604 1 19:19:57 - 0:00 /usr/es/sbin/cluster/clinfo
root 7536736 2883604 0 19:19:54 - 0:00 /usr/sbin/rsct/bin/hatsd -n 1 -o deadManSwitch
root 7667876 7536736 0 19:19:55 - 0:00 /usr/sbin/rsct/bin/hats_nim
root 7930022 2883604 1 19:19:54 - 0:00 /usr/sbin/rsct/bin/hagsd grpsvcs
root 9765046 1703998 1 19:23:13 - 0:00 /bin/ksh93 /usr/es/sbin/cluster/events/node_down ha_db02
root 9830402 9896004 0 19:16:58 pts/1 0:00 /usr/bin/smitty hacmp
root 9896004 4522150 0 19:14:23 pts/1 0:00 -ksh
root 10616984 9765046 1 19:23:13 - 0:00 ps -edf
:node_down[96] : end ps -edf
:node_down[98] [[ '' == forced ]]
:node_down[120] UPDATESTATD=0
:node_down[121] export UPDATESTATD
:node_down[125] : If RG_DEPENDENCIES was set to true by the cluster manager,
:node_down[126] : then all resource group actions are taken via rg_move events.
:node_down[128] [[ FALSE == FALSE ]]
:node_down[131] : Set the RESOURCE_GROUPS environment variable with the names
:node_down[132] : of all Resource Groups participating in this event, and export
:node_down[133] : them to all successive scripts.
:node_down[135] set -a
:node_down[136] clsetenvgrp ha_db02 node_down
:clsetenvgrp[+50] [[ high = high ]]
:clsetenvgrp[+50] version=1.16
:clsetenvgrp[+52] usingVer=clSetenvgrp
:clsetenvgrp[+57] clSetenvgrp ha_db02 node_down
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+58] exit 0
:node_down[136] eval FORCEDOWN_GROUPS='""' RESOURCE_GROUPS='""' HOMELESS_GROUPS='""' HOMELESS_FOLLOWER_GROUPS='""' ERRSTATE_GROUPS='""' PRINCIPAL_ACTIONS='""' ASSOCIATE_ACTIONS='""' AUXILLIARY_ACTIONS='""'
:node_down[1] FORCEDOWN_GROUPS=''
:node_down[1] RESOURCE_GROUPS=''
:node_down[1] HOMELESS_GROUPS=''
:node_down[1] HOMELESS_FOLLOWER_GROUPS=''
:node_down[1] ERRSTATE_GROUPS=''
:node_down[1] PRINCIPAL_ACTIONS=''
:node_down[1] ASSOCIATE_ACTIONS=''
:node_down[1] AUXILLIARY_ACTIONS=''
:node_down[137] RC=0
:node_down[138] set +a
:node_down[139] : exit status of clsetenvgrp ha_db02 node_down is: 0
:node_down[140] (( 0 != 0 ))
:node_down[145] : Process_Resources for parallel-processed resource groups
:node_down[146] : If RG_DEPENDENCIES is true, then this call is responsible for
:node_down[147] : starting the necessary rg_move events.
:node_down[149] process_resources
:process_resources[2423] [[ high == high ]]
:process_resources[2423] version=1.125
:process_resources[2425] STATUS=0
:process_resources[2426] sddsrv_off=FALSE
:process_resources[2428] true
:process_resources[2430] : call rgpa, and it will tell us what to do next
:process_resources[2432] set -a
:process_resources[2433] clRGPA
:clRGPA[+49] [[ high = high ]]
:clRGPA[+49] version=1.16
:clRGPA[+51] usingVer=clrgpa
:clRGPA[+56] clrgpa
2013-03-24T19:23:13.252601 clrgpa
:clRGPA[+57] exit 0
:process_resources[2433] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[2434] RC=0
:process_resources[2435] set +a
:process_resources[2437] (( 0 != 0 ))
:process_resources[2443] RESOURCE_GROUPS=''
:process_resources[2444] GROUPNAME=''
:process_resources[2444] export GROUPNAME
:process_resources[2748] break
:process_resources[2759] : If sddsrv was turned off above, turn it back on again
:process_resources[2761] [[ FALSE == TRUE ]]
:process_resources[2767] exit 0
:node_down[155] : if the rpc statd got updated during process_resources, we dont have to
:node_down[156] : update it again.
:node_down[158] [ -f /usr/es/sbin/cluster/etc/updatestatd ]
:node_down[164] : For each participating resource group, serially process the resources
:node_down[204] [[ REAL == EMUL ]]
:node_down[209] [[ -f /tmp/.RPCLOCKDSTOPPED ]]
:node_down[238] : Process_Resources for fencing
:node_down[240] process_resources FENCE
:process_resources[2423] [[ high == high ]]
:process_resources[2423] version=1.125
:process_resources[2425] STATUS=0
:process_resources[2426] sddsrv_off=FALSE
:process_resources[2428] true
:process_resources[2430] : call rgpa, and it will tell us what to do next
:process_resources[2432] set -a
:process_resources[2433] clRGPA FENCE
:clRGPA[+49] [[ high = high ]]
:clRGPA[+49] version=1.16
:clRGPA[+51] usingVer=clrgpa
:clRGPA[+56] clrgpa FENCE
2013-03-24T19:23:13.324900 clrgpa
:clRGPA[+57] exit 0
:process_resources[2433] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[2434] RC=0
:process_resources[2435] set +a
:process_resources[2437] (( 0 != 0 ))
:process_resources[2443] RESOURCE_GROUPS=''
:process_resources[2444] GROUPNAME=''
:process_resources[2444] export GROUPNAME
:process_resources[2748] break
:process_resources[2759] : If sddsrv was turned off above, turn it back on again
:process_resources[2761] [[ FALSE == TRUE ]]
:process_resources[2767] exit 0
:node_down[248] : If any volume groups were varyd on in passive mode when this node
:node_down[249] : came up, all the prior resource group processing would have left them
:node_down[250] : in passive mode. Completely vary them off at this point.
:node_down[252] [[ ha_db02 == ha_db01 ]]
:node_down[275] : For local events, delete the file required for the X25 monitor, so it stops.
:node_down[277] [[ ha_db02 == ha_db01 ]]
:node_down[305] : Perform any fencing necessary for concurrent volume groups
:node_down[307] [[ -n YES ]]
:node_down[307] [[ YES == YES ]]
:node_down[308] cl_fence_vg ha_db02
:cl_fence_vg[446] [[ high == high ]]
:cl_fence_vg[446] version=1.20
:cl_fence_vg[448] HA_DIR=es
:cl_fence_vg[450] export All_DHB_disks
:cl_fence_vg[456] NODEPV=/tmp/cl_pv.out
:cl_fence_vg[457] lspv
:cl_fence_vg[457] 1> /tmp/cl_pv.out
:cl_fence_vg[459] [[ -z ha_db01 ]]
:cl_fence_vg[468] : Accept a formal parameter of 'name of node that failed' if none were set
:cl_fence_vg[469] : in the environment
:cl_fence_vg[471] EVENTNODE=ha_db02
:cl_fence_vg[473] [[ -z ha_db02 ]]
:cl_fence_vg[482] : An explicit volume group list can be passed. Pick up any such
:cl_fence_vg[484] shift
:cl_fence_vg[485] vg_list=''
:cl_fence_vg[487] common_groups=''
:cl_fence_vg[488] common_vgs=''
:cl_fence_vg[490] [[ -z '' ]]
:cl_fence_vg[493] : Find all the concurrent resource groups that contain both ha_db02 and ha_db01
:cl_fence_vg[493] sed -n '/group = /s/^.* "\(.*\)".*/\1/p'
:cl_fence_vg[493] odmget -q 'startup_pref = OAAN' HACMPgroup
:cl_fence_vg[538] : Look at each of the resource groups in turn to determine what concurrent
:cl_fence_vg[539] : volume groups the local node ha_db01 share access with
:cl_fence_vg[540] : ha_db02
:cl_fence_vg[566] : Process the list of common volume groups,
:cl_fence_vg[583] rm /tmp/cl_pv.out
:node_down[309] : exit status of cl_fence_vg is: 0
:node_down[313] exit 0
Mar 24 19:23:13 EVENT COMPLETED: node_down ha_db02 0
Mar 24 19:23:13 EVENT START: node_down_complete ha_db02
:node_down_complete[+81] [[ high = high ]]
:node_down_complete[+81] version=1.2.3.58
:node_down_complete[+82] :node_down_complete[+82] cl_get_path
HA_DIR=es
:node_down_complete[+84] export NODENAME=ha_db02
:node_down_complete[+85] export PARAM=
:node_down_complete[+87] VSD_PROG=/usr/lpp/csd/bin/hacmp_vsd_down2
:node_down_complete[+88] HPS_PROG=/usr/es/sbin/cluster/events/utils/cl_HPS_init
:node_down_complete[+89] NODE_HALT_CONTROL_FILE=/usr/es/sbin/cluster/etc/ha_nodehalt.lock
:node_down_complete[+98] STATUS=0
:node_down_complete[+100] [ ! -n ]
:node_down_complete[+102] EMULATE=REAL
:node_down_complete[+105] set -u
:node_down_complete[+107] [ 1 -lt 1 ]
:node_down_complete[+113] [[ = forced ]]
:node_down_complete[+133] [[ FALSE = FALSE ]]
:node_down_complete[+141] set -a
:node_down_complete[+142] clsetenvgrp ha_db02 node_down_complete
:clsetenvgrp[+50] [[ high = high ]]
:clsetenvgrp[+50] version=1.16
:clsetenvgrp[+52] usingVer=clSetenvgrp
:clsetenvgrp[+57] clSetenvgrp ha_db02 node_down_complete
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+58] exit 0
:node_down_complete[+142] eval FORCEDOWN_GROUPS="" RESOURCE_GROUPS="" HOMELESS_GROUPS="" HOMELESS_FOLLOWER_GROUPS="" ERRSTATE_GROUPS="" PRINCIPAL_ACTIONS="" ASSOCIATE_ACTIONS="" AUXILLIARY_ACTIONS=""
:node_down_complete[+142] FORCEDOWN_GROUPS= RESOURCE_GROUPS= HOMELESS_GROUPS= HOMELESS_FOLLOWER_GROUPS= ERRSTATE_GROUPS= PRINCIPAL_ACTIONS= ASSOCIATE_ACTIONS= AUXILLIARY_ACTIONS=
:node_down_complete[+143] RC=0
:node_down_complete[+144] set +a
:node_down_complete[+145] : exit status of clsetenvgrp ha_db02 node_down_complete is: 0
:node_down_complete[+146] [ 0 -ne 0 ]
:node_down_complete[+157] [[ FALSE = FALSE ]]
:node_down_complete[+159] process_resources
:process_resources[2423] [[ high == high ]]
:process_resources[2423] version=1.125
:process_resources[2425] STATUS=0
:process_resources[2426] sddsrv_off=FALSE
:process_resources[2428] true
:process_resources[2430] : call rgpa, and it will tell us what to do next
:process_resources[2432] set -a
:process_resources[2433] clRGPA
:clRGPA[+49] [[ high = high ]]
:clRGPA[+49] version=1.16
:clRGPA[+51] usingVer=clrgpa
:clRGPA[+56] clrgpa
2013-03-24T19:23:13.665759 clrgpa
:clRGPA[+57] exit 0
:process_resources[2433] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[2434] RC=0
:process_resources[2435] set +a
:process_resources[2437] (( 0 != 0 ))
:process_resources[2443] RESOURCE_GROUPS=''
:process_resources[2444] GROUPNAME=''
:process_resources[2444] export GROUPNAME
:process_resources[2748] break
:process_resources[2759] : If sddsrv was turned off above, turn it back on again
:process_resources[2761] [[ FALSE == TRUE ]]
:process_resources[2767] exit 0
:node_down_complete[+160] RC=0
:node_down_complete[+161] : exit status of process_resources is: 0
:node_down_complete[+162] [ 0 -ne 0 ]
:node_down_complete[+172] [[ != forced ]]
:node_down_complete[+172] [[ -f /usr/lpp/csd/bin/hacmp_vsd_down2 ]]
:node_down_complete[+194] :node_down_complete[+194] odmget -qnodename = ha_db01 HACMPadapter
:node_down_complete[+194] grep type
:node_down_complete[+194] grep hps
SP_SWITCH=
:node_down_complete[+196] SWITCH_TYPE=
:node_down_complete[+197] FED_TYPE=
:node_down_complete[+198] [[ -n ]]
:node_down_complete[+209] [ != forced -a -n -a -f /usr/es/sbin/cluster/events/utils/cl_HPS_init -a -z ]
:node_down_complete[+252] LOCALCOMP=N
:node_down_complete[+256] [[ FALSE = FALSE ]]
:node_down_complete[+296] [ ha_db02 = ha_db01 ]
:node_down_complete[+384] exit 0
Mar 24 19:23:13 EVENT COMPLETED: node_down_complete ha_db02 0
:check_for_site_down_complete[+54] [[ high = high ]]
:check_for_site_down_complete[+54] version=1.4
:check_for_site_down_complete[+55] :check_for_site_down_complete[+55] cl_get_path
HA_DIR=es
:check_for_site_down_complete[+57] STATUS=0
:check_for_site_down_complete[+59] set +u
:check_for_site_down_complete[+61] [ ]
:check_for_site_down_complete[+72] exit 0
HACMP Event Summary
Event: TE_FAIL_NODE
Start time: Sun Mar 24 19:23:12 2013
End time: Sun Mar 24 19:23:13 2013
Action: Resource: Script Name:
----------------------------------------------------------------------------
No resources changed as a result of this event
----------------------------------------------------------------------------
db02

Mar 25 19:21:15 EVENT START: rg_move_release ha_db01 1
:rg_move_release[+54] [[ high = high ]]
:rg_move_release[+54] version=1.6
:rg_move_release[+56] set -u
:rg_move_release[+58] [ 2 != 2 ]
:rg_move_release[+64] set +u
:rg_move_release[+66] clcallev rg_move ha_db01 1 RELEASE
Mar 25 19:21:15 EVENT START: rg_move ha_db01 1 RELEASE
:get_local_nodename[+42] [[ high = high ]]
:get_local_nodename[+42] version=1.2.1.19
:get_local_nodename[+43] :get_local_nodename[+43] cl_get_path
HA_DIR=es
:get_local_nodename[+44] :get_local_nodename[+44] cl_get_path -S
OP_SEP=~
:get_local_nodename[+46] AIXODMDIR=/etc/objrepos
:get_local_nodename[+47] HAODMDIR=/etc/es/objrepos
:get_local_nodename[+49] :get_local_nodename[+49] uname -m
UNAME=00525B6F4C00
:get_local_nodename[+55] export PLATFORM=__AIX__
:get_local_nodename[+61] export ODMDIR=/etc/es/objrepos
:get_local_nodename[+63] :get_local_nodename[+63] /usr/es/sbin/cluster/utilities/cllsclstr -N
nodename=ha_db02
:get_local_nodename[+65] :get_local_nodename[+65] cllsnode -cS
:get_local_nodename[+65] cut -d: -f1
NODENAME=ha_db01
ha_db02
:get_local_nodename[+69] [[ ha_db01 = ha_db02 ]]
:get_local_nodename[+69] [[ ha_db02 = ha_db02 ]]
:get_local_nodename[+72] print ha_db02
:get_local_nodename[+73] exit 0
:rg_move[+71] version=1.49
:rg_move[+81] STATUS=0
:rg_move[+83] [ ! -n ]
:rg_move[+85] EMULATE=REAL
:rg_move[+91] set -u
:rg_move[+93] export NODENAME=ha_db01
:rg_move[+94] RGID=1
:rg_move[+95] [[ 3 = 3 ]]
:rg_move[+97] ACTION=RELEASE
:rg_move[+104] odmget -qid=1 HACMPgroup
:rg_move[+104] egrep group =
:rg_move[+104] awk {print $3}
:rg_move[+104] eval RGNAME="ha_oracle_rg"
:rg_move[+104] RGNAME=ha_oracle_rg
:rg_move[+106] UPDATESTATD=0
:rg_move[+107] export UPDATESTATD
:rg_move[+111] export RG_MOVE_EVENT=true
:rg_move[+116] group_state=$RESGRP_ha_oracle_rg_ha_db02
:rg_move[+117] set +u
:rg_move[+118] eval print $RESGRP_ha_oracle_rg_ha_db02
:rg_move[+118] print
:rg_move[+118] export RG_MOVE_ONLINE=
:rg_move[+119] set -u
:rg_move[+120] RG_MOVE_ONLINE=TMP_ERROR
:rg_move[+127] rm -f /tmp/.NFSSTOPPED
:rg_move[+128] rm -f /tmp/.RPCLOCKDSTOPPED
:rg_move[+135] set -a
:rg_move[+136] :rg_move[+136] clsetenvgrp ha_db01 rg_move ha_oracle_rg
:clsetenvgrp[+50] [[ high = high ]]
:clsetenvgrp[+50] version=1.16
:clsetenvgrp[+52] usingVer=clSetenvgrp
:clsetenvgrp[+57] clSetenvgrp ha_db01 rg_move ha_oracle_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+58] exit 0
clsetenvgrp_output=NFS_ha_oracle_rg="FALSE"
IPAT_ha_oracle_rg="TRUE"
FORCEDOWN_GROUPS=""
RESOURCE_GROUPS="ha_oracle_rg"
HOMELESS_GROUPS=""
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS="A"
ASSOCIATE_ACTIONS="S"
AUXILLIARY_ACTIONS="N"
:rg_move[+137] RC=0
:rg_move[+138] eval NFS_ha_oracle_rg="FALSE"
IPAT_ha_oracle_rg="TRUE"
FORCEDOWN_GROUPS=""
RESOURCE_GROUPS="ha_oracle_rg"
HOMELESS_GROUPS=""
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS="A"
ASSOCIATE_ACTIONS="S"
AUXILLIARY_ACTIONS="N"
:rg_move[+138] NFS_ha_oracle_rg=FALSE
:rg_move[+138] IPAT_ha_oracle_rg=TRUE
:rg_move[+138] FORCEDOWN_GROUPS=
:rg_move[+138] RESOURCE_GROUPS=ha_oracle_rg
:rg_move[+138] HOMELESS_GROUPS=
:rg_move[+138] HOMELESS_FOLLOWER_GROUPS=
:rg_move[+138] ERRSTATE_GROUPS=
:rg_move[+138] PRINCIPAL_ACTIONS=A
:rg_move[+138] ASSOCIATE_ACTIONS=S
:rg_move[+138] AUXILLIARY_ACTIONS=N
:rg_move[+139] set +a
:rg_move[+143] [[ 0 -ne 0 ]]
:rg_move[+143] [[ -z ha_oracle_rg ]]
:rg_move[+152] [[ -z FALSE ]]
:rg_move[+198] set -a
:rg_move[+199] clsetenvres ha_oracle_rg rg_move
:rg_move[+199] eval PRINCIPAL_ACTION="ACQUIRE" ASSOCIATE_ACTION="SUSTAIN" AUXILLIARY_ACTION="NONE" VG_RR_ACTION="ACQUIRE" SIBLING_NODES= SIBLING_NONOWNER_NODES= SIBLING_ACQUIRING_NODES= SIBLING_RELEASING_NODES= FOLLOWER_ACTION="NONE" NFS_HOST= DISK= CONCURRENT_VOLUME_GROUP= EXPORT_FILESYSTEM= EXPORT_FILESYSTEM_V4= STABLE_STORAGE_PATH= STABLE_STORAGE_COOKIE= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= COMMUNICATION_LINKS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= INACTIVE_TAKEOVER= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= PPRC_REP_RESOURCE= ERCMF_REP_RESOURCE= SVCPPRC_REP_RESOURCE= SR_REP_RESOURCE= TC_REP_RESOURCE= GMD_REP_RESOURCE= GMVG_REP_RESOURCE= OEM_VOLUME_GROUP= OEM_FILESYSTEM= APPLICATIONS="ha_app" FILESYSTEM="ALL" FORCED_VARYON="false" FSCHECK_TOOL="fsck" FS_BEFORE_IPADDR="false" RECOVERY_METHOD="sequential" SERVICE_LABEL="ser" SSA_DISK_FENCING="false" VG_AUTO_IMPORT="false" VOLUME_GROUP="datavg"
:rg_move[+199] PRINCIPAL_ACTION=ACQUIRE ASSOCIATE_ACTION=SUSTAIN AUXILLIARY_ACTION=NONE VG_RR_ACTION=ACQUIRE SIBLING_NODES= SIBLING_NONOWNER_NODES= SIBLING_ACQUIRING_NODES= SIBLING_RELEASING_NODES= FOLLOWER_ACTION=NONE NFS_HOST= DISK= CONCURRENT_VOLUME_GROUP= EXPORT_FILESYSTEM= EXPORT_FILESYSTEM_V4= STABLE_STORAGE_PATH= STABLE_STORAGE_COOKIE= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= COMMUNICATION_LINKS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= INACTIVE_TAKEOVER= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= PPRC_REP_RESOURCE= ERCMF_REP_RESOURCE= SVCPPRC_REP_RESOURCE= SR_REP_RESOURCE= TC_REP_RESOURCE= GMD_REP_RESOURCE= GMVG_REP_RESOURCE= OEM_VOLUME_GROUP= OEM_FILESYSTEM= APPLICATIONS=ha_app FILESYSTEM=ALL FORCED_VARYON=false FSCHECK_TOOL=fsck FS_BEFORE_IPADDR=false RECOVERY_METHOD=sequential SERVICE_LABEL=ser SSA_DISK_FENCING=false VG_AUTO_IMPORT=false VOLUME_GROUP=datavg
:rg_move[+200] set +a
:rg_move[+201] export GROUPNAME=ha_oracle_rg
+ha_oracle_rg:rg_move[+201] [[ RELEASE = ]]
+ha_oracle_rg:rg_move[+225] [ RELEASE = RELEASE ]
+ha_oracle_rg:rg_move[+227] [ ACQUIRE = RELEASE ]
+ha_oracle_rg:rg_move[+227] [ SUSTAIN = UMOUNT ]
+ha_oracle_rg:rg_move[+227] [ NONE = RELEASE_SECONDARY ]
+ha_oracle_rg:rg_move[+241] [ 0 -ne 0 ]
+ha_oracle_rg:rg_move[+247] UPDATESTATD=1
+ha_oracle_rg:rg_move[+254] process_resources
:process_resources[2423] [[ high == high ]]
:process_resources[2423] version=1.125
:process_resources[2425] STATUS=0
:process_resources[2426] sddsrv_off=FALSE
:process_resources[2428] true
:process_resources[2430] : call rgpa, and it will tell us what to do next
:process_resources[2432] set -a
:process_resources[2433] clRGPA
:clRGPA[+49] [[ high = high ]]
:clRGPA[+49] version=1.16
:clRGPA[+51] usingVer=clrgpa
:clRGPA[+56] clrgpa
2013-03-25T19:21:15.370945 clrgpa
:clRGPA[+57] exit 0
:process_resources[2433] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[2434] RC=0
:process_resources[2435] set +a
:process_resources[2437] (( 0 != 0 ))
:process_resources[2443] RESOURCE_GROUPS=ha_oracle_rg
+ha_oracle_rg:process_resources[2444] GROUPNAME=ha_oracle_rg
+ha_oracle_rg:process_resources[2444] export GROUPNAME
+ha_oracle_rg:process_resources[2748] break
+ha_oracle_rg:process_resources[2759] : If sddsrv was turned off above, turn it back on again
+ha_oracle_rg:process_resources[2761] [[ FALSE == TRUE ]]
+ha_oracle_rg:process_resources[2767] exit 0
+ha_oracle_rg:rg_move[+292] [ -f /tmp/.NFSSTOPPED ]
+ha_oracle_rg:rg_move[+312] [ -f /tmp/.RPCLOCKDSTOPPED ]
+ha_oracle_rg:rg_move[+337] exit 0
Mar 25 19:21:15 EVENT COMPLETED: rg_move ha_db01 1 RELEASE 0
:rg_move_release[+68] exit 0
Mar 25 19:21:15 EVENT COMPLETED: rg_move_release ha_db01 1 0
Mar 25 19:21:43 EVENT START: rg_move_fence ha_db01 1
:rg_move_fence[+57] [[ high = high ]]
:rg_move_fence[+57] version=1.11
:rg_move_fence[+58] export NODENAME=ha_db01
:rg_move_fence[+60] set -u
:rg_move_fence[+62] [ 2 != 2 ]
:rg_move_fence[+68] set +u
:rg_move_fence[+70] [[ -z FALSE ]]
:rg_move_fence[+75] [[ FALSE = TRUE ]]
:rg_move_fence[+98] process_resources FENCE
:process_resources[2423] [[ high == high ]]
:process_resources[2423] version=1.125
:process_resources[2425] STATUS=0
:process_resources[2426] sddsrv_off=FALSE
:process_resources[2428] true
:process_resources[2430] : call rgpa, and it will tell us what to do next
:process_resources[2432] set -a
:process_resources[2433] clRGPA FENCE
:clRGPA[+49] [[ high = high ]]
:clRGPA[+49] version=1.16
:clRGPA[+51] usingVer=clrgpa
:clRGPA[+56] clrgpa FENCE
2013-03-25T19:21:43.682835 clrgpa
:clRGPA[+57] exit 0
:process_resources[2433] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[2434] RC=0
:process_resources[2435] set +a
:process_resources[2437] (( 0 != 0 ))
:process_resources[2443] RESOURCE_GROUPS=''
:process_resources[2444] GROUPNAME=''
:process_resources[2444] export GROUPNAME
:process_resources[2748] break
:process_resources[2759] : If sddsrv was turned off above, turn it back on again
:process_resources[2761] [[ FALSE == TRUE ]]
:process_resources[2767] exit 0
:rg_move_fence[+99] : exit status of process_resources FENCE is: 0
:rg_move_fence[+102] [[ FALSE = TRUE ]]
:rg_move_fence[+136] exit 0
Mar 25 19:21:43 EVENT COMPLETED: rg_move_fence ha_db01 1 0
Mar 25 19:21:43 EVENT START: rg_move_acquire ha_db01 1
:rg_move_acquire[+54] [[ high = high ]]
:rg_move_acquire[+54] version=1.9
:rg_move_acquire[+56] set -u
:rg_move_acquire[+58] [ 2 != 2 ]
:rg_move_acquire[+64] set +u
:rg_move_acquire[+67] clcallev rg_move ha_db01 1 ACQUIRE
Mar 25 19:21:43 EVENT START: rg_move ha_db01 1 ACQUIRE
:get_local_nodename[+42] [[ high = high ]]
:get_local_nodename[+42] version=1.2.1.19
:get_local_nodename[+43] :get_local_nodename[+43] cl_get_path
HA_DIR=es
:get_local_nodename[+44] :get_local_nodename[+44] cl_get_path -S
OP_SEP=~
:get_local_nodename[+46] AIXODMDIR=/etc/objrepos
:get_local_nodename[+47] HAODMDIR=/etc/es/objrepos
:get_local_nodename[+49] :get_local_nodename[+49] uname -m
UNAME=00525B6F4C00
:get_local_nodename[+55] export PLATFORM=__AIX__
:get_local_nodename[+61] export ODMDIR=/etc/es/objrepos
:get_local_nodename[+63] :get_local_nodename[+63] /usr/es/sbin/cluster/utilities/cllsclstr -N
nodename=ha_db02
:get_local_nodename[+65] :get_local_nodename[+65] cllsnode -cS
:get_local_nodename[+65] cut -d: -f1
NODENAME=ha_db01
ha_db02
:get_local_nodename[+69] [[ ha_db01 = ha_db02 ]]
:get_local_nodename[+69] [[ ha_db02 = ha_db02 ]]
:get_local_nodename[+72] print ha_db02
:get_local_nodename[+73] exit 0
:rg_move[+71] version=1.49
:rg_move[+81] STATUS=0
:rg_move[+83] [ ! -n ]
:rg_move[+85] EMULATE=REAL
:rg_move[+91] set -u
:rg_move[+93] export NODENAME=ha_db01
:rg_move[+94] RGID=1
:rg_move[+95] [[ 3 = 3 ]]
:rg_move[+97] ACTION=ACQUIRE
:rg_move[+104] odmget -qid=1 HACMPgroup
:rg_move[+104] egrep group =
:rg_move[+104] awk {print $3}
:rg_move[+104] eval RGNAME="ha_oracle_rg"
:rg_move[+104] RGNAME=ha_oracle_rg
:rg_move[+106] UPDATESTATD=0
:rg_move[+107] export UPDATESTATD
:rg_move[+111] export RG_MOVE_EVENT=true
:rg_move[+116] group_state=$RESGRP_ha_oracle_rg_ha_db02
:rg_move[+117] set +u
:rg_move[+118] eval print $RESGRP_ha_oracle_rg_ha_db02
:rg_move[+118] print
:rg_move[+118] export RG_MOVE_ONLINE=
:rg_move[+119] set -u
:rg_move[+120] RG_MOVE_ONLINE=TMP_ERROR
:rg_move[+127] rm -f /tmp/.NFSSTOPPED
:rg_move[+128] rm -f /tmp/.RPCLOCKDSTOPPED
:rg_move[+135] set -a
:rg_move[+136] :rg_move[+136] clsetenvgrp ha_db01 rg_move ha_oracle_rg
:clsetenvgrp[+50] [[ high = high ]]
:clsetenvgrp[+50] version=1.16
:clsetenvgrp[+52] usingVer=clSetenvgrp
:clsetenvgrp[+57] clSetenvgrp ha_db01 rg_move ha_oracle_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+58] exit 0
clsetenvgrp_output=NFS_ha_oracle_rg="FALSE"
IPAT_ha_oracle_rg="TRUE"
FORCEDOWN_GROUPS=""
RESOURCE_GROUPS="ha_oracle_rg"
HOMELESS_GROUPS=""
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS="A"
ASSOCIATE_ACTIONS="S"
AUXILLIARY_ACTIONS="N"
:rg_move[+137] RC=0
:rg_move[+138] eval NFS_ha_oracle_rg="FALSE"
IPAT_ha_oracle_rg="TRUE"
FORCEDOWN_GROUPS=""
RESOURCE_GROUPS="ha_oracle_rg"
HOMELESS_GROUPS=""
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS="A"
ASSOCIATE_ACTIONS="S"
AUXILLIARY_ACTIONS="N"
:rg_move[+138] NFS_ha_oracle_rg=FALSE
:rg_move[+138] IPAT_ha_oracle_rg=TRUE
:rg_move[+138] FORCEDOWN_GROUPS=
:rg_move[+138] RESOURCE_GROUPS=ha_oracle_rg
:rg_move[+138] HOMELESS_GROUPS=
:rg_move[+138] HOMELESS_FOLLOWER_GROUPS=
:rg_move[+138] ERRSTATE_GROUPS=
:rg_move[+138] PRINCIPAL_ACTIONS=A
:rg_move[+138] ASSOCIATE_ACTIONS=S
:rg_move[+138] AUXILLIARY_ACTIONS=N
:rg_move[+139] set +a
:rg_move[+143] [[ 0 -ne 0 ]]
:rg_move[+143] [[ -z ha_oracle_rg ]]
:rg_move[+152] [[ -z FALSE ]]
:rg_move[+198] set -a
:rg_move[+199] clsetenvres ha_oracle_rg rg_move
:rg_move[+199] eval PRINCIPAL_ACTION="ACQUIRE" ASSOCIATE_ACTION="SUSTAIN" AUXILLIARY_ACTION="NONE" VG_RR_ACTION="ACQUIRE" SIBLING_NODES= SIBLING_NONOWNER_NODES= SIBLING_ACQUIRING_NODES= SIBLING_RELEASING_NODES= FOLLOWER_ACTION="NONE" NFS_HOST= DISK= CONCURRENT_VOLUME_GROUP= EXPORT_FILESYSTEM= EXPORT_FILESYSTEM_V4= STABLE_STORAGE_PATH= STABLE_STORAGE_COOKIE= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= COMMUNICATION_LINKS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= INACTIVE_TAKEOVER= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= PPRC_REP_RESOURCE= ERCMF_REP_RESOURCE= SVCPPRC_REP_RESOURCE= SR_REP_RESOURCE= TC_REP_RESOURCE= GMD_REP_RESOURCE= GMVG_REP_RESOURCE= OEM_VOLUME_GROUP= OEM_FILESYSTEM= APPLICATIONS="ha_app" FILESYSTEM="ALL" FORCED_VARYON="false" FSCHECK_TOOL="fsck" FS_BEFORE_IPADDR="false" RECOVERY_METHOD="sequential" SERVICE_LABEL="ser" SSA_DISK_FENCING="false" VG_AUTO_IMPORT="false" VOLUME_GROUP="datavg"
:rg_move[+199] PRINCIPAL_ACTION=ACQUIRE ASSOCIATE_ACTION=SUSTAIN AUXILLIARY_ACTION=NONE VG_RR_ACTION=ACQUIRE SIBLING_NODES= SIBLING_NONOWNER_NODES= SIBLING_ACQUIRING_NODES= SIBLING_RELEASING_NODES= FOLLOWER_ACTION=NONE NFS_HOST= DISK= CONCURRENT_VOLUME_GROUP= EXPORT_FILESYSTEM= EXPORT_FILESYSTEM_V4= STABLE_STORAGE_PATH= STABLE_STORAGE_COOKIE= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= COMMUNICATION_LINKS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= INACTIVE_TAKEOVER= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= PPRC_REP_RESOURCE= ERCMF_REP_RESOURCE= SVCPPRC_REP_RESOURCE= SR_REP_RESOURCE= TC_REP_RESOURCE= GMD_REP_RESOURCE= GMVG_REP_RESOURCE= OEM_VOLUME_GROUP= OEM_FILESYSTEM= APPLICATIONS=ha_app FILESYSTEM=ALL FORCED_VARYON=false FSCHECK_TOOL=fsck FS_BEFORE_IPADDR=false RECOVERY_METHOD=sequential SERVICE_LABEL=ser SSA_DISK_FENCING=false VG_AUTO_IMPORT=false VOLUME_GROUP=datavg
:rg_move[+200] set +a
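The clsetenvres step above pushes all the resource group's attributes into the environment as one long line of `NAME="value"` pairs, which is painful to eyeball. A small helper (a sketch, not part of any PowerHA tooling) can split such a dump into a dict for inspection; `shlex` handles the mix of quoted (`FILESYSTEM="ALL"`) and bare (`VOLUME_GROUP=datavg`) values the trace shows.

```python
import shlex

def parse_assignments(trace_line):
    """Split a clsetenvres-style NAME="value" dump into a dict.

    Empty assignments like SIBLING_NODES= come back as empty strings,
    which makes unset resources easy to spot.
    """
    env = {}
    for token in shlex.split(trace_line):
        if '=' in token:
            name, _, value = token.partition('=')
            env[name] = value
    return env
```

That makes it quick to confirm the group really carries `VOLUME_GROUP=datavg`, `SERVICE_LABEL=ser`, and `APPLICATIONS=ha_app` on both nodes when comparing the two hacmp.out files.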
:rg_move[+201] export GROUPNAME=ha_oracle_rg
+ha_oracle_rg:rg_move[+201] [[ ACQUIRE = ]]
+ha_oracle_rg:rg_move[+225] [ ACQUIRE = RELEASE ]
+ha_oracle_rg:rg_move[+231] [ ACQUIRE = ACQUIRE ]
+ha_oracle_rg:rg_move[+233] [ ACQUIRE = ACQUIRE ]
+ha_oracle_rg:rg_move[+235] MOUNT_FILESYSTEM=
+ha_oracle_rg:rg_move[+236] clcallev node_up_local
Mar 25 19:21:44 EVENT START: node_up_local
+ha_oracle_rg:node_up_local[185] [[ high == high ]]
+ha_oracle_rg:node_up_local[185] version=1.2.1.96
+ha_oracle_rg:node_up_local[187] STATUS=0
+ha_oracle_rg:node_up_local[187] typeset -i STATUS
+ha_oracle_rg:node_up_local[188] CROSSMOUNT=0
+ha_oracle_rg:node_up_local[188] typeset -i CROSSMOUNT
+ha_oracle_rg:node_up_local[189] REP_RES_FATAL=0
+ha_oracle_rg:node_up_local[189] typeset -i REP_RES_FATAL
+ha_oracle_rg:node_up_local[190] SKIPBRKRES=0
+ha_oracle_rg:node_up_local[190] typeset -i SKIPBRKRES
+ha_oracle_rg:node_up_local[191] SKIPVARYON=0
+ha_oracle_rg:node_up_local[191] typeset -i SKIPVARYON
+ha_oracle_rg:node_up_local[192] ROOT=''
+ha_oracle_rg:node_up_local[193] sddsrv_off=FALSE
+ha_oracle_rg:node_up_local[195] (( 0 != 0 ))
+ha_oracle_rg:node_up_local[201] set -u
+ha_oracle_rg:node_up_local[204] : First, indicate that the resource is in the process of coming up by placing
+ha_oracle_rg:node_up_local[205] : it into state ACQUIRING. This will persist until the resource comes
+ha_oracle_rg:node_up_local[206] : completely up or there is an error.
+ha_oracle_rg:node_up_local[208] set_resource_status ACQUIRING
+ha_oracle_rg:node_up_local[132] PS4_FUNC=set_resource_status
+ha_oracle_rg:node_up_local[132] typeset PS4_FUNC
+ha_oracle_rg:node_up_local[133] [[ high == high ]]
+ha_oracle_rg:node_up_local[133] set -x
+ha_oracle_rg:node_up_local[135] set +u
+ha_oracle_rg:node_up_local[136] eval TEMPNFS='$NFS_ha_oracle_rg'
+ha_oracle_rg:node_up_local[1] TEMPNFS=FALSE
+ha_oracle_rg:node_up_local[137] set -u
+ha_oracle_rg:node_up_local[139] [[ -z FALSE ]]
+ha_oracle_rg:node_up_local[139] [[ FALSE == FALSE ]]
+ha_oracle_rg:node_up_local[141] clchdaemons -d clstrmgr_scripts -t resource_locator -n ha_db02 -o ha_oracle_rg -v ACQUIRING
+ha_oracle_rg:node_up_local[148] : Resource Manager Updates
+ha_oracle_rg:node_up_local[150] [[ ACQUIRING == ACQUIRING ]]
+ha_oracle_rg:node_up_local[152] [[ NONE == ACQUIRE_SECONDARY ]]
+ha_oracle_rg:node_up_local[155] [[ NONE == PRIMARY_BECOMES_SECONDARY ]]
+ha_oracle_rg:node_up_local[159] cl_RMupdate acquiring ha_oracle_rg node_up_local
2013-03-25T19:21:44.325784
2013-03-25T19:21:44.339848
Reference string: Mon.Mar.25.19:21:44.BEIST.2013.node_up_local.ha_oracle_rg.ref
+ha_oracle_rg:node_up_local[172] (( 0 != 0 ))
+ha_oracle_rg:node_up_local[210] export CROSSMOUNT
+ha_oracle_rg:node_up_local[211] [[ -n '' ]]
+ha_oracle_rg:node_up_local[219] [[ ACQUIRE == ACQUIRE ]]
+ha_oracle_rg:node_up_local[222] : Call replicated resource set_primary method associated
+ha_oracle_rg:node_up_local[223] : with any replicated resource defined in the resource group
+ha_oracle_rg:node_up_local[224] : we arecurrently processing.
+ha_oracle_rg:node_up_local[226] call_replicated_methods set_primary ''
call_replicated_methods: CALLED arguments set_primary
+ha_oracle_rg:call_replicated_methods[call_replicated_methods+5] TYPE=set_primary
+ha_oracle_rg:call_replicated_methods[call_replicated_methods+6] NODENAME=
+ha_oracle_rg:call_replicated_methods[call_replicated_methods+7] GROUPS=
+ha_oracle_rg:call_replicated_methods[call_replicated_methods+7] [[ -z set_primary ]]
+ha_oracle_rg:call_replicated_methods[call_replicated_methods+16] rc=0
+ha_oracle_rg:call_replicated_methods[call_replicated_methods+16] [[ -z ]]
+ha_oracle_rg:call_replicated_methods[call_replicated_methods+20] +ha_oracle_rg:call_replicated_methods[call_replicated_methods+20] cl_rrmethods2call set_primary
+ha_oracle_rg:cl_rrmethods2call[+49] [[ high = high ]]
+ha_oracle_rg:cl_rrmethods2call[+49] version=1.17
+ha_oracle_rg:cl_rrmethods2call[+50] +ha_oracle_rg:cl_rrmethods2call[+50] cl_get_path
HA_DIR=es
+ha_oracle_rg:cl_rrmethods2call[+76] RRMETHODS=
+ha_oracle_rg:cl_rrmethods2call[+77] NEED_RR_ENV_VARS=no
+ha_oracle_rg:cl_rrmethods2call[+114] [[ no = yes ]]
+ha_oracle_rg:cl_rrmethods2call[+120] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+125] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+130] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+135] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+140] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+145] echo
+ha_oracle_rg:cl_rrmethods2call[+146] exit 0
METHODS=
+ha_oracle_rg:call_replicated_methods[call_replicated_methods+53] return 0
+ha_oracle_rg:node_up_local[227] STATUS=0
+ha_oracle_rg:node_up_local[228] : exit status of call_replicated_methods is: 0
+ha_oracle_rg:node_up_local[232] : Start the WPAR. Due to the fact that WPAR enablement/disablement is
+ha_oracle_rg:node_up_local[233] : done in a lazy fashion, the actual state of WPAR activity will not
+ha_oracle_rg:node_up_local[234] : necessarily match our ODM state. Consequently, we $'can\'t' simply look
+ha_oracle_rg:node_up_local[235] : at the WPAR_NAME environment variable.
+ha_oracle_rg:node_up_local[237] : The command clstart_wpar will check if the resource group actually has
+ha_oracle_rg:node_up_local[238] : a WPAR so we $'don\'t' need to check for that here.
+ha_oracle_rg:node_up_local[240] clstart_wpar
+ha_oracle_rg:clstart_wpar[167] [[ high == high ]]
+ha_oracle_rg:clstart_wpar[167] version=1.10
+ha_oracle_rg:clstart_wpar[170] uname
+ha_oracle_rg:clstart_wpar[170] OSNAME=AIX
+ha_oracle_rg:clstart_wpar[179] [[ AIX == *AIX* ]]
+ha_oracle_rg:clstart_wpar[180] lslpp -l bos.wpars
+ha_oracle_rg:clstart_wpar[180] 1> /dev/null 2>& 1
+ha_oracle_rg:clstart_wpar[183] [[ node_up_local == reconfig_resource_acquire ]]
+ha_oracle_rg:clstart_wpar[190] . /usr/es/sbin/cluster/wpar/wpar_utils
+ha_oracle_rg:clstart_wpar[+20] ERRNO=0
+ha_oracle_rg:clstart_wpar[+22] [[ high == high ]]
+ha_oracle_rg:clstart_wpar[+22] set -x
+ha_oracle_rg:clstart_wpar[+23] [[ high == high ]]
+ha_oracle_rg:clstart_wpar[+23] version=1.10
+ha_oracle_rg:clstart_wpar[+25] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+ha_oracle_rg:clstart_wpar[+20] [[ high == high ]]
+ha_oracle_rg:clstart_wpar[+20] set -x
+ha_oracle_rg:clstart_wpar[+21] [[ high == high ]]
+ha_oracle_rg:clstart_wpar[+21] version=1.4
+ha_oracle_rg:clstart_wpar[+23] PATH=/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/events/emulate:/usr/es/sbin/cluster/events/emulate/driver:/usr/es/sbin/cluster/events/emulate/utils:/usr/es/sbin/cluster/tguides/bin:/usr/es/sbin/cluster/tguides/classes:/usr/es/sbin/cluster/tguides/images:/usr/es/sbin/cluster/tguides/scripts:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+ha_oracle_rg:clstart_wpar[+24] export PATH
+ha_oracle_rg:clstart_wpar[+26] typeset usageErr invalArgErr internalErr
+ha_oracle_rg:clstart_wpar[+28] usageErr=10
+ha_oracle_rg:clstart_wpar[+29] invalArgErr=11
+ha_oracle_rg:clstart_wpar[+30] internalErr=12
+ha_oracle_rg:clstart_wpar[+192] typeset wparName state
+ha_oracle_rg:clstart_wpar[+193] typeset -i result
+ha_oracle_rg:clstart_wpar[+195] loadWparName ha_oracle_rg
+ha_oracle_rg:clstart_wpar[loadWparName+5] usage='Usage: loadWparName <resource group name>'
+ha_oracle_rg:clstart_wpar[loadWparName+5] typeset -r usage
+ha_oracle_rg:clstart_wpar[loadWparName+6] typeset rgName wparName wparDir rc
+ha_oracle_rg:clstart_wpar[loadWparName+8] [[ 1 < 1 ]]
+ha_oracle_rg:clstart_wpar[loadWparName+13] rgName=ha_oracle_rg
+ha_oracle_rg:clstart_wpar[loadWparName+14] wparDir=/var/hacmp/adm/wpar
+ha_oracle_rg:clstart_wpar[loadWparName+15] [[ ! -d /var/hacmp/adm/wpar ]]
+ha_oracle_rg:clstart_wpar[loadWparName+22] [[ -f /var/hacmp/adm/wpar/ha_oracle_rg ]]
+ha_oracle_rg:clstart_wpar[loadWparName+41] criteria='group = ha_oracle_rg and name = WPAR_NAME'
+ha_oracle_rg:clstart_wpar[loadWparName+41] typeset criteria
+ha_oracle_rg:clstart_wpar[loadWparName+42] : Getting the value from the ODM
+ha_oracle_rg:clstart_wpar[loadWparName+44] [[ -z /usr/es/sbin/cluster/etc/objrepos/active ]]
+ha_oracle_rg:clstart_wpar[loadWparName+44] [[ ! -f /usr/es/sbin/cluster/etc/objrepos/active/HACMPresource ]]
+ha_oracle_rg:clstart_wpar[loadWparName+49] grep 'value ='
+ha_oracle_rg:clstart_wpar[loadWparName+50] cut '-d"' -f2
+ha_oracle_rg:clstart_wpar[loadWparName+49] odmget -q 'group = ha_oracle_rg and name = WPAR_NAME' HACMPresource
+ha_oracle_rg:clstart_wpar[loadWparName+49] WPAR_NAME=''
+ha_oracle_rg:clstart_wpar[loadWparName+49] export WPAR_NAME
+ha_oracle_rg:clstart_wpar[loadWparName+51] rc=0
+ha_oracle_rg:clstart_wpar[loadWparName+52] [[ 0 != 0 ]]
+ha_oracle_rg:clstart_wpar[loadWparName+57] printf %s
+ha_oracle_rg:clstart_wpar[loadWparName+58] return
+ha_oracle_rg:clstart_wpar[+195] wparName=''
+ha_oracle_rg:clstart_wpar[+197] [[ -z '' ]]
+ha_oracle_rg:clstart_wpar[+198] saveWparName ha_oracle_rg
+ha_oracle_rg:clstart_wpar[saveWparName+5] usage='Usage: saveWparName <resource group name> [wpar name]'
+ha_oracle_rg:clstart_wpar[saveWparName+5] typeset -r usage
+ha_oracle_rg:clstart_wpar[saveWparName+6] typeset rgName wparName wparDir
+ha_oracle_rg:clstart_wpar[saveWparName+8] [[ 1 < 1 ]]
+ha_oracle_rg:clstart_wpar[saveWparName+13] rgName=ha_oracle_rg
+ha_oracle_rg:clstart_wpar[saveWparName+14] wparName=''
+ha_oracle_rg:clstart_wpar[saveWparName+15] wparDir=/var/hacmp/adm/wpar
+ha_oracle_rg:clstart_wpar[saveWparName+17] [[ ! -d /var/hacmp/adm/wpar ]]
+ha_oracle_rg:clstart_wpar[saveWparName+19] printf %s ''
+ha_oracle_rg:clstart_wpar[saveWparName+19] 1> /var/hacmp/adm/wpar/ha_oracle_rg
+ha_oracle_rg:clstart_wpar[+199] exit 0
+ha_oracle_rg:node_up_local[241] RC=0
+ha_oracle_rg:node_up_local[242] : exit status of clstart_wpar is: 0
+ha_oracle_rg:node_up_local[244] (( 0 != 0 ))
+ha_oracle_rg:node_up_local[251] : Mount filesystems, varyon volume groups, make disks available,
+ha_oracle_rg:node_up_local[252] : and export filesystems if FS_BEFORE_IPADDR is true i.e acquire
+ha_oracle_rg:node_up_local[253] : FS before acquiring IPaddr. This removes the error of 'Missing Filesystem'
+ha_oracle_rg:node_up_local[255] [[ false == true ]]
+ha_oracle_rg:node_up_local[263] : Acquire service address on boot adapter if we are the highest
+ha_oracle_rg:node_up_local[264] : priority node for that resource. Determined by environment
+ha_oracle_rg:node_up_local[265] : variables.
+ha_oracle_rg:node_up_local[267] [[ -n ser ]]
+ha_oracle_rg:node_up_local[269] clcallev acquire_service_addr ser
Mar 25 19:21:44 EVENT START: acquire_service_addr ser
+ha_oracle_rg:acquire_service_addr[+520] [[ high = high ]]
+ha_oracle_rg:acquire_service_addr[+520] version=1.68
+ha_oracle_rg:acquire_service_addr[+521] +ha_oracle_rg:acquire_service_addr[+521] cl_get_path
HA_DIR=es
+ha_oracle_rg:acquire_service_addr[+522] +ha_oracle_rg:acquire_service_addr[+522] cl_get_path -S
OP_SEP=~
+ha_oracle_rg:acquire_service_addr[+524] ACQUIRE_SNA_RUN=false
+ha_oracle_rg:acquire_service_addr[+526] TELINIT=false
+ha_oracle_rg:acquire_service_addr[+527] TELINIT_FILE=/usr/es/sbin/cluster/.telinit
+ha_oracle_rg:acquire_service_addr[+529] typeset -i telinit_wait_count=36
+ha_oracle_rg:acquire_service_addr[+532] DELAY=5
+ha_oracle_rg:acquire_service_addr[+536] STATUS=0
+ha_oracle_rg:acquire_service_addr[+539] [ ! -n ]
+ha_oracle_rg:acquire_service_addr[+541] EMULATE=REAL
+ha_oracle_rg:acquire_service_addr[+545] PROC_RES=false
+ha_oracle_rg:acquire_service_addr[+549] [[ 0 != 0 ]]
+ha_oracle_rg:acquire_service_addr[+556] [ 1 -le 0 ]
+ha_oracle_rg:acquire_service_addr[+562] export RESOURCE_GROUPS=ha_oracle_rg
+ha_oracle_rg:acquire_service_addr[+568] saveNSORDER=UNDEFINED
+ha_oracle_rg:acquire_service_addr[+569] NSORDER=local
+ha_oracle_rg:acquire_service_addr[+569] export NSORDER
+ha_oracle_rg:acquire_service_addr[+575] export GROUPNAME
+ha_oracle_rg:acquire_service_addr[+575] [[ false = true ]]
+ha_oracle_rg:acquire_service_addr[+585] SERVICELABELS=ser
+ha_oracle_rg:acquire_service_addr[+585] [[ -n ]]
+ha_oracle_rg:acquire_service_addr[+604] ALLSRVADDRS=All_service_addrs
+ha_oracle_rg:acquire_service_addr[+605] ALLNOERRSRV=All_nonerror_service_addrs
+ha_oracle_rg:acquire_service_addr[+606] [ REAL = EMUL ]
+ha_oracle_rg:acquire_service_addr[+611] cl_RMupdate resource_acquiring All_service_addrs acquire_service_addr
2013-03-25T19:21:44.624163
2013-03-25T19:21:44.638846
Reference string: Mon.Mar.25.19:21:44.BEIST.2013.acquire_service_addr.All_service_addrs.ha_oracle_rg.ref
+ha_oracle_rg:acquire_service_addr[+620] clgetif -a ser
+ha_oracle_rg:acquire_service_addr[+620] 2> /dev/null
+ha_oracle_rg:acquire_service_addr[+621] [ 3 -ne 0 ]
+ha_oracle_rg:acquire_service_addr[+626] +ha_oracle_rg:acquire_service_addr[+626] name_to_addr ser
+ha_oracle_rg:acquire_service_addr[name_to_addr+3] cllsif -J ~ -Sn ser
+ha_oracle_rg:acquire_service_addr[name_to_addr+3] cut -d~ -f7
+ha_oracle_rg:acquire_service_addr[name_to_addr+3] uniq
+ha_oracle_rg:acquire_service_addr[name_to_addr+3] echo 192.168.4.13
+ha_oracle_rg:acquire_service_addr[name_to_addr+4] exit 0
service_textual_addr=192.168.4.13
+ha_oracle_rg:acquire_service_addr[+627] +ha_oracle_rg:acquire_service_addr[+627] cllsif -J ~ -Sn 192.168.4.13
+ha_oracle_rg:acquire_service_addr[+627] cut -d~ -f3
+ha_oracle_rg:acquire_service_addr[+627] uniq
NETWORK=net_ether_01
+ha_oracle_rg:acquire_service_addr[+634] +ha_oracle_rg:acquire_service_addr[+634] cllsif -J ~ -Si ha_db02
+ha_oracle_rg:acquire_service_addr[+634] awk -F~ -v NET=net_ether_01 {if ($2 == "boot" && $3 == NET) print $1}
+ha_oracle_rg:acquire_service_addr[+634] sort
boot=db02_boot1
db02_boot2
+ha_oracle_rg:acquire_service_addr[+636] [ -z db02_boot1
db02_boot2 ]
+ha_oracle_rg:acquire_service_addr[+652] +ha_oracle_rg:acquire_service_addr[+652] odmget -q name=net_ether_01 HACMPnetwork
+ha_oracle_rg:acquire_service_addr[+652] awk /alias = / {print $3}
+ha_oracle_rg:acquire_service_addr[+652] sed s/"//g
ALIAS=1
+ha_oracle_rg:acquire_service_addr[+652] [[ 1 = 1 ]]
+ha_oracle_rg:acquire_service_addr[+655] +ha_oracle_rg:acquire_service_addr[+655] best_boot_addr net_ether_01 db02_boot1 db02_boot2
+ha_oracle_rg:acquire_service_addr[best_boot_addr+3] NETWORK=net_ether_01
+ha_oracle_rg:acquire_service_addr[best_boot_addr+4] shift
+ha_oracle_rg:acquire_service_addr[best_boot_addr+5] candidate_boots=db02_boot1 db02_boot2
+ha_oracle_rg:acquire_service_addr[best_boot_addr+9] echo db02_boot1 db02_boot2
+ha_oracle_rg:acquire_service_addr[best_boot_addr+9] tr \n
+ha_oracle_rg:acquire_service_addr[best_boot_addr+9] wc -l
+ha_oracle_rg:acquire_service_addr[best_boot_addr+9] let num_candidates= 2
+ha_oracle_rg:acquire_service_addr[best_boot_addr+9] [[ 2 -eq 1 ]]
+ha_oracle_rg:acquire_service_addr[best_boot_addr+15] best_candidate=NONE
+ha_oracle_rg:acquire_service_addr[best_boot_addr+16] best_aliases=0
+ha_oracle_rg:acquire_service_addr[best_boot_addr+17] ip_family=
+ha_oracle_rg:acquire_service_addr[best_boot_addr+21] +ha_oracle_rg:acquire_service_addr[best_boot_addr+21] odmget -qnetwork = net_ether_01 AND function = shared HACMPadapter
+ha_oracle_rg:acquire_service_addr[best_boot_addr+21] grep max_aliases
+ha_oracle_rg:acquire_service_addr[best_boot_addr+21] awk {print $3}
+ha_oracle_rg:acquire_service_addr[best_boot_addr+21] sort -u
MA=0
+ha_oracle_rg:acquire_service_addr[best_boot_addr+251] cllsif -J ~ -Sn db02_boot1
+ha_oracle_rg:acquire_service_addr[best_boot_addr+251] LC_ALL=C
+ha_oracle_rg:acquire_service_addr[best_boot_addr+251] cut -f7,11 -d~
+ha_oracle_rg:acquire_service_addr[best_boot_addr+251] tr ~
+ha_oracle_rg:acquire_service_addr[best_boot_addr+251] read candidate_dot_addr junk
+ha_oracle_rg:acquire_service_addr[best_boot_addr+254] : Find the address family
+ha_oracle_rg:acquire_service_addr[best_boot_addr+256] ip_family=
+ha_oracle_rg:acquire_service_addr[best_boot_addr+257] +ha_oracle_rg:acquire_service_addr[best_boot_addr+257] get_inet_family 192.168.2.12
+ha_oracle_rg:acquire_service_addr[get_inet_family+3] ip_label=192.168.2.12
+ha_oracle_rg:acquire_service_addr[get_inet_family+4] +ha_oracle_rg:acquire_service_addr[get_inet_family+4] cllsif -J ~ -Sn 192.168.2.12
+ha_oracle_rg:acquire_service_addr[get_inet_family+4] awk -F~ {print $15}
inet_family=AF_INET
+ha_oracle_rg:acquire_service_addr[get_inet_family+4] [[ AF_INET = AF_INET ]]
+ha_oracle_rg:acquire_service_addr[get_inet_family+7] echo inet
+ha_oracle_rg:acquire_service_addr[get_inet_family+8] return
ip_family=inet
+ha_oracle_rg:acquire_service_addr[best_boot_addr+257] [[ inet == inet ]]
+ha_oracle_rg:acquire_service_addr[best_boot_addr+261] +ha_oracle_rg:acquire_service_addr[best_boot_addr+261] print 192.168.2.12
+ha_oracle_rg:acquire_service_addr[best_boot_addr+261] tr ./ xx
addr=$i192x168x2x12_ha_db02
+ha_oracle_rg:acquire_service_addr[best_boot_addr+261] [[ inet == inet6 ]]
+ha_oracle_rg:acquire_service_addr[best_boot_addr+267] +ha_oracle_rg:acquire_service_addr[best_boot_addr+267] eval print $i192x168x2x12_ha_db02
+ha_oracle_rg:acquire_service_addr[best_boot_addr+267] print UP
candidate_state=UP
+ha_oracle_rg:acquire_service_addr[best_boot_addr+267] [[ UP = UP ]]
+ha_oracle_rg:acquire_service_addr[best_boot_addr+273] +ha_oracle_rg:acquire_service_addr[best_boot_addr+273] clgetif -a 192.168.2.12
+ha_oracle_rg:acquire_service_addr[best_boot_addr+273] 2> /dev/null
+ha_oracle_rg:acquire_service_addr[best_boot_addr+273] awk {print $1}
candidate_if=en1
+ha_oracle_rg:acquire_service_addr[best_boot_addr+273] [[ __AIX__ = __AIX__ ]]
+ha_oracle_rg:acquire_service_addr[best_boot_addr+275] +ha_oracle_rg:acquire_service_addr[best_boot_addr+275] ifconfig en1
+ha_oracle_rg:acquire_service_addr[best_boot_addr+275] LC_ALL=C
+ha_oracle_rg:acquire_service_addr[best_boot_addr+275] grep -c -w inet[6]\{0,1\}
candidate_aliases=1
+ha_oracle_rg:acquire_service_addr[best_boot_addr+275] [[ NONE = NONE ]]
+ha_oracle_rg:acquire_service_addr[best_boot_addr+281] best_candidate=db02_boot1
+ha_oracle_rg:acquire_service_addr[best_boot_addr+282] best_aliases=1
+ha_oracle_rg:acquire_service_addr[best_boot_addr+251] cllsif -J ~ -Sn db02_boot2
+ha_oracle_rg:acquire_service_addr[best_boot_addr+251] LC_ALL=C
+ha_oracle_rg:acquire_service_addr[best_boot_addr+251] cut -f7,11 -d~
+ha_oracle_rg:acquire_service_addr[best_boot_addr+251] tr ~
+ha_oracle_rg:acquire_service_addr[best_boot_addr+251] read candidate_dot_addr junk
+ha_oracle_rg:acquire_service_addr[best_boot_addr+254] : Find the address family
+ha_oracle_rg:acquire_service_addr[best_boot_addr+256] ip_family=
+ha_oracle_rg:acquire_service_addr[best_boot_addr+257] +ha_oracle_rg:acquire_service_addr[best_boot_addr+257] get_inet_family 192.168.3.12
+ha_oracle_rg:acquire_service_addr[get_inet_family+3] ip_label=192.168.3.12
+ha_oracle_rg:acquire_service_addr[get_inet_family+4] +ha_oracle_rg:acquire_service_addr[get_inet_family+4] cllsif -J ~ -Sn 192.168.3.12
+ha_oracle_rg:acquire_service_addr[get_inet_family+4] awk -F~ {print $15}
inet_family=AF_INET
+ha_oracle_rg:acquire_service_addr[get_inet_family+4] [[ AF_INET = AF_INET ]]
+ha_oracle_rg:acquire_service_addr[get_inet_family+7] echo inet
+ha_oracle_rg:acquire_service_addr[get_inet_family+8] return
ip_family=inet
+ha_oracle_rg:acquire_service_addr[best_boot_addr+257] [[ inet == inet ]]
+ha_oracle_rg:acquire_service_addr[best_boot_addr+261] +ha_oracle_rg:acquire_service_addr[best_boot_addr+261] tr ./ xx
+ha_oracle_rg:acquire_service_addr[best_boot_addr+261] print 192.168.3.12
addr=$i192x168x3x12_ha_db02
+ha_oracle_rg:acquire_service_addr[best_boot_addr+261] [[ inet == inet6 ]]
+ha_oracle_rg:acquire_service_addr[best_boot_addr+267] +ha_oracle_rg:acquire_service_addr[best_boot_addr+267] eval print $i192x168x3x12_ha_db02
+ha_oracle_rg:acquire_service_addr[best_boot_addr+267] print UP
candidate_state=UP
+ha_oracle_rg:acquire_service_addr[best_boot_addr+267] [[ UP = UP ]]
+ha_oracle_rg:acquire_service_addr[best_boot_addr+273] +ha_oracle_rg:acquire_service_addr[best_boot_addr+273] clgetif -a 192.168.3.12
+ha_oracle_rg:acquire_service_addr[best_boot_addr+273] 2> /dev/null
+ha_oracle_rg:acquire_service_addr[best_boot_addr+273] awk {print $1}
candidate_if=en2
+ha_oracle_rg:acquire_service_addr[best_boot_addr+273] [[ __AIX__ = __AIX__ ]]
+ha_oracle_rg:acquire_service_addr[best_boot_addr+275] +ha_oracle_rg:acquire_service_addr[best_boot_addr+275] ifconfig en2
+ha_oracle_rg:acquire_service_addr[best_boot_addr+275] LC_ALL=C
+ha_oracle_rg:acquire_service_addr[best_boot_addr+275] grep -c -w inet[6]\{0,1\}
candidate_aliases=2
+ha_oracle_rg:acquire_service_addr[best_boot_addr+275] [[ db02_boot1 = NONE ]]
+ha_oracle_rg:acquire_service_addr[best_boot_addr+275] [[ 2 -lt 1 ]]
+ha_oracle_rg:acquire_service_addr[best_boot_addr+275] [[ db02_boot1 != NONE ]]
+ha_oracle_rg:acquire_service_addr[best_boot_addr+291] echo db02_boot1
+ha_oracle_rg:acquire_service_addr[best_boot_addr+292] return
boot_addr=db02_boot1
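The `best_boot_addr` section above walks both boot interfaces and keeps the one carrying the fewest configured addresses: en1/db02_boot1 counts 1 alias, en2/db02_boot2 counts 2 (apparently because 192.168.4.12 is already configured on en2), so db02_boot1 wins. The selection the trace performs can be sketched as below; the real logic lives in the acquire_service_addr event script, this only mirrors the comparison it makes.

```python
def best_boot_addr(candidates):
    """Pick the boot interface carrying the fewest IP aliases.

    candidates: list of (label, alias_count) tuples in cllsif order.
    Ties keep the earlier candidate, matching the strict less-than
    test ('[[ 2 -lt 1 ]]') visible in the trace.
    """
    best_label, best_aliases = None, None
    for label, aliases in candidates:
        if best_label is None or aliases < best_aliases:
            best_label, best_aliases = label, aliases
    return best_label
```

So the service address 192.168.4.13 is about to be aliased onto en1, even though the 192.168.4.x subnet currently sits on en2; that is worth checking against the intended network design.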
+ha_oracle_rg:acquire_service_addr[+657] [ 0 -ne 0 ]
+ha_oracle_rg:acquire_service_addr[+669] +ha_oracle_rg:acquire_service_addr[+669] name_to_addr db02_boot1
+ha_oracle_rg:acquire_service_addr[name_to_addr+3] cllsif -J ~ -Sn db02_boot1
+ha_oracle_rg:acquire_service_addr[name_to_addr+3] cut -d~ -f7
+ha_oracle_rg:acquire_service_addr[name_to_addr+3] uniq
+ha_oracle_rg:acquire_service_addr[name_to_addr+3] echo 192.168.2.12
+ha_oracle_rg:acquire_service_addr[name_to_addr+4] exit 0
boot_dot_addr=192.168.2.12
+ha_oracle_rg:acquire_service_addr[+674] +ha_oracle_rg:acquire_service_addr[+674] awk {print $1}
+ha_oracle_rg:acquire_service_addr[+674] clgetif -a 192.168.2.12
+ha_oracle_rg:acquire_service_addr[+674] 2> /dev/null
INTERFACE=en1
+ha_oracle_rg:acquire_service_addr[+674] [[ -z en1 ]]
+ha_oracle_rg:acquire_service_addr[+699] SNA_LAN_LINKS=
+ha_oracle_rg:acquire_service_addr[+708] SNA_CONNECTIONS=
+ha_oracle_rg:acquire_service_addr[+710] [ -n en1 ]
+ha_oracle_rg:acquire_service_addr[+710] [[ -n ]]
+ha_oracle_rg:acquire_service_addr[+725] +ha_oracle_rg:acquire_service_addr[+725] odmget -q identifier=192.168.4.13 HACMPadapter
+ha_oracle_rg:acquire_service_addr[+725] grep netmask
+ha_oracle_rg:acquire_service_addr[+725] sed s/"//g
+ha_oracle_rg:acquire_service_addr[+725] cut -f2 -d =
NETMASK= 255.255.255.0
+ha_oracle_rg:acquire_service_addr[+727] [ REAL = EMUL ]
+ha_oracle_rg:acquire_service_addr[+732] +ha_oracle_rg:acquire_service_addr[+732] get_inet_family 192.168.4.13
+ha_oracle_rg:acquire_service_addr[get_inet_family+3] ip_label=192.168.4.13
+ha_oracle_rg:acquire_service_addr[get_inet_family+4] +ha_oracle_rg:acquire_service_addr[get_inet_family+4] awk -F~ {print $15}
+ha_oracle_rg:acquire_service_addr[get_inet_family+4] cllsif -J ~ -Sn 192.168.4.13
inet_family=AF_INET
+ha_oracle_rg:acquire_service_addr[get_inet_family+4] [[ AF_INET = AF_INET ]]
+ha_oracle_rg:acquire_service_addr[get_inet_family+7] echo inet
+ha_oracle_rg:acquire_service_addr[get_inet_family+8] return
INET_FAMILY=inet
+ha_oracle_rg:acquire_service_addr[+732] [[ inet = inet6 ]]
+ha_oracle_rg:acquire_service_addr[+737] cl_swap_IP_address rotating acquire en1 192.168.4.13 192.168.2.12 255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[+1115] [[ high = high ]]
+ha_oracle_rg:cl_swap_IP_address[+1115] version=1.9.1.110
+ha_oracle_rg:cl_swap_IP_address[+1116] +ha_oracle_rg:cl_swap_IP_address[+1116] cl_get_path
HA_DIR=es
+ha_oracle_rg:cl_swap_IP_address[+1117] +ha_oracle_rg:cl_swap_IP_address[+1117] cl_get_path -S
OP_SEP=~
+ha_oracle_rg:cl_swap_IP_address[+1118] export LC_ALL=C
+ha_oracle_rg:cl_swap_IP_address[+1119] RESTORE_ROUTES=/usr/es/sbin/cluster/.restore_routes
+ha_oracle_rg:cl_swap_IP_address[+1123] cldomain
+ha_oracle_rg:cl_swap_IP_address[+1123] export HA_DOMAIN_NAME=ha_oracle
+ha_oracle_rg:cl_swap_IP_address[+1124] export HB_SERVER_SOCKET=/var/ha/soc/topsvcs/server_socket
+ha_oracle_rg:cl_swap_IP_address[+1125] BINDIR=/usr/sbin/rsct/bin
+ha_oracle_rg:cl_swap_IP_address[+1128] +ha_oracle_rg:cl_swap_IP_address[+1128] clmixver
MIXVER=11
+ha_oracle_rg:cl_swap_IP_address[+1129] MIXVERRC=0
+ha_oracle_rg:cl_swap_IP_address[+1131] cl_echo 33 Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating acquire en1 192.168.4.13 192.168.2.12 255.255.255.0 /usr/es/sbin/cluster/events/utils/cl_swap_IP_address rotating acquire en1 192.168.4.13 192.168.2.12 255.255.255.0
+ha_oracle_rg:cl_echo[+35] version=1.16
+ha_oracle_rg:cl_echo[+84] HACMP_OUT_FILE=/var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+89] set +u
+ha_oracle_rg:cl_echo[+89] [[ -n ]]
+ha_oracle_rg:cl_echo[+92] set -u
+ha_oracle_rg:cl_echo[+95] print -n -u2 Mar 25 2013 19:21:45
Mar 25 2013 19:21:45 +ha_oracle_rg:cl_echo[+96] MSG_ID=33
+ha_oracle_rg:cl_echo[+97] shift
+ha_oracle_rg:cl_echo[+98] dspmsg scripts.cat 33 Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating acquire en1 192.168.4.13 192.168.2.12 255.255.255.0 /usr/es/sbin/cluster/events/utils/cl_swap_IP_address rotating acquire en1 192.168.4.13 192.168.2.12 255.255.255.0
+ha_oracle_rg:cl_echo[+98] 1>& 2
Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating acquire en1 192.168.4.13 192.168.2.12 255.255.255.0+ha_oracle_rg:cl_echo[+101] clsynclog /var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+101] 1> /dev/null 2>& 1
+ha_oracle_rg:cl_echo[+105] return 0
+ha_oracle_rg:cl_swap_IP_address[+1132] date
Mon Mar 25 19:21:45 BEIST 2013
+ha_oracle_rg:cl_swap_IP_address[+1138] [[ __AIX__ == __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[+1138] [[ 6 -eq 6 ]]
+ha_oracle_rg:cl_swap_IP_address[+1140] non_alias 192.168.4.13
+ha_oracle_rg:cl_swap_IP_address[+1140] [[ 0 -eq 1 ]]
+ha_oracle_rg:cl_swap_IP_address[+1148] set_ipignoreredirects
Setting ipignoreredirects to 1
+ha_oracle_rg:cl_swap_IP_address[+1151] PROC_RES=false
+ha_oracle_rg:cl_swap_IP_address[+1155] [[ 0 != 0 ]]
+ha_oracle_rg:cl_swap_IP_address[+1159] set -u
+ha_oracle_rg:cl_swap_IP_address[+1161] ATM_IF_TYPE=
+ha_oracle_rg:cl_swap_IP_address[+1162] STANDBY_ROUTES=
+ha_oracle_rg:cl_swap_IP_address[+1163] NEED_ROUTE_PRESERVATION=0
+ha_oracle_rg:cl_swap_IP_address[+1170] netstat -in
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
en1 1500 link#2 0.1a.64.93.a3.83 11949 0 8500 4 0
en1 1500 192.168.2 192.168.2.12 11949 0 8500 4 0
en2 1500 link#3 0.11.25.bf.95.52 6775 0 5000 3 0
en2 1500 192.168.3 192.168.3.12 6775 0 5000 3 0
en2 1500 192.168.4 192.168.4.12 6775 0 5000 3 0
lo0 16896 link#1 19045 0 19045 0 0
lo0 16896 127 127.0.0.1 19045 0 19045 0 0
lo0 16896 ::1%1 19045 0 19045 0 0
+ha_oracle_rg:cl_swap_IP_address[+1171] netstat -rnC
Routing tables
Destination Gateway Flags Wt Policy If Cost Config_Cost
Route tree for Protocol Family 2 (Internet):
127/8 127.0.0.1 U 1 - lo0 0 0
192.168.2.0 192.168.2.12 UHSb 1 - en1 0 0 =>
192.168.2/24 192.168.2.12 U 1 - en1 0 0
192.168.2.12 127.0.0.1 UGHS 1 - lo0 0 0
192.168.2.255 192.168.2.12 UHSb 1 - en1 0 0
192.168.3.0 192.168.3.12 UHSb 1 - en2 0 0 =>
192.168.3/24 192.168.3.12 U 1 - en2 0 0
192.168.3.12 127.0.0.1 UGHS 1 - lo0 0 0
192.168.3.255 192.168.3.12 UHSb 1 - en2 0 0
192.168.4.0 192.168.4.12 UHSb 1 WRR en2 0 0 =>
192.168.4/24 192.168.4.12 U 1 WRR en2 0 0
192.168.4.12 127.0.0.1 UGHS 1 - lo0 0 0
192.168.4.255 192.168.4.12 UHSb 1 WRR en2 0 0
Route tree for Protocol Family 24 (Internet v6):
::1%1 ::1%1 UH 1 - lo0 0 0
+ha_oracle_rg:cl_swap_IP_address[+1172] CASC_OR_ROT=rotating
+ha_oracle_rg:cl_swap_IP_address[+1173] ACQ_OR_RLSE=acquire
+ha_oracle_rg:cl_swap_IP_address[+1174] IF=en1
+ha_oracle_rg:cl_swap_IP_address[+1175] ADDR=192.168.4.13
+ha_oracle_rg:cl_swap_IP_address[+1176] OLD_ADDR=192.168.2.12
+ha_oracle_rg:cl_swap_IP_address[+1177] NETMASK=255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[+1178] +ha_oracle_rg:cl_swap_IP_address[+1178] get_MTU en1
+ha_oracle_rg:cl_swap_IP_address[get_MTU+5] IF=en1
+ha_oracle_rg:cl_swap_IP_address[get_MTU+7] MTUSIZE=
+ha_oracle_rg:cl_swap_IP_address[get_MTU+7] [[ __AIX__ == __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[get_MTU+9] +ha_oracle_rg:cl_swap_IP_address[get_MTU+9] awk NR == 2 {print $2}
+ha_oracle_rg:cl_swap_IP_address[get_MTU+9] netstat -nI en1
MTUSIZE=1500
+ha_oracle_rg:cl_swap_IP_address[get_MTU+15] print 1500
MTUSIZE=1500
+ha_oracle_rg:cl_swap_IP_address[+1178] [[ __AIX__ == __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[+1178] [[ rotating = cascading ]]
+ha_oracle_rg:cl_swap_IP_address[+1197] +ha_oracle_rg:cl_swap_IP_address[+1197] check_ATM_interface en1
+ha_oracle_rg:cl_swap_IP_address[check_ATM_interface+5] if_name=en1
+ha_oracle_rg:cl_swap_IP_address[check_ATM_interface+12] entstat -d en1
+ha_oracle_rg:cl_swap_IP_address[check_ATM_interface+12] 2>& 1
+ha_oracle_rg:cl_swap_IP_address[check_ATM_interface+12] awk /Device Type: ATM LAN Emulation/ {print "atmle_ent";exit}
ATM_IF_TYPE=
+ha_oracle_rg:cl_swap_IP_address[+1197] [[ = atm ]]
+ha_oracle_rg:cl_swap_IP_address[+1211] +ha_oracle_rg:cl_swap_IP_address[+1211] cut -f3 -d~
+ha_oracle_rg:cl_swap_IP_address[+1211] cllsif -J ~ -Sw -n 192.168.4.13
NET=net_ether_01
+ha_oracle_rg:cl_swap_IP_address[+1212] +ha_oracle_rg:cl_swap_IP_address[+1212] cut -f3 -d~
+ha_oracle_rg:cl_swap_IP_address[+1212] cllsnw -J ~ -Sw -n net_ether_01
ALIAS=true
+ha_oracle_rg:cl_swap_IP_address[+1213] +ha_oracle_rg:cl_swap_IP_address[+1213] cut -f4 -d~
+ha_oracle_rg:cl_swap_IP_address[+1213] cllsif -J ~ -Sw -n 192.168.4.13
NET_TYPE=ether
+ha_oracle_rg:cl_swap_IP_address[+1213] [[ ether = hps ]]
+ha_oracle_rg:cl_swap_IP_address[+1213] [[ true = true ]]
+ha_oracle_rg:cl_swap_IP_address[+1233] [ acquire = acquire ]
+ha_oracle_rg:cl_swap_IP_address[+1235] cl_echo 7310 cl_swap_IP_address: Configuring network interface en1 with aliased IP address 192.168.4.13 cl_swap_IP_address en1 192.168.4.13
+ha_oracle_rg:cl_echo[+35] version=1.16
+ha_oracle_rg:cl_echo[+84] HACMP_OUT_FILE=/var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+89] set +u
+ha_oracle_rg:cl_echo[+89] [[ -n ]]
+ha_oracle_rg:cl_echo[+92] set -u
+ha_oracle_rg:cl_echo[+95] print -n -u2 Mar 25 2013 19:21:45
Mar 25 2013 19:21:45 +ha_oracle_rg:cl_echo[+96] MSG_ID=7310
+ha_oracle_rg:cl_echo[+97] shift
+ha_oracle_rg:cl_echo[+98] dspmsg scripts.cat 7310 cl_swap_IP_address: Configuring network interface en1 with aliased IP address 192.168.4.13 cl_swap_IP_address en1 192.168.4.13
+ha_oracle_rg:cl_echo[+98] 1>& 2
cl_swap_IP_address: Configuring network interface en1 with aliased IP address 192.168.4.13+ha_oracle_rg:cl_echo[+101] clsynclog /var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+101] 1> /dev/null 2>& 1
+ha_oracle_rg:cl_echo[+105] return 0
+ha_oracle_rg:cl_swap_IP_address[+1235] [[ __AIX__ = __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[+1239] clifconfig en1 alias 192.168.4.13 netmask 255.255.255.0
+ha_oracle_rg:clifconfig[51] [[ high == high ]]
+ha_oracle_rg:clifconfig[51] version=1.3
+ha_oracle_rg:clifconfig[53] set -A args en1 alias 192.168.4.13 netmask 255.255.255.0
+ha_oracle_rg:clifconfig[60] [[ -n en1 ]]
+ha_oracle_rg:clifconfig[60] [[ e != - ]]
+ha_oracle_rg:clifconfig[61] interface=en1
+ha_oracle_rg:clifconfig[62] shift
+ha_oracle_rg:clifconfig[64] [[ -n alias ]]
+ha_oracle_rg:clifconfig[66] alias=1
+ha_oracle_rg:clifconfig[90] shift
+ha_oracle_rg:clifconfig[64] [[ -n 192.168.4.13 ]]
+ha_oracle_rg:clifconfig[69] params=' address=192.168.4.13'
+ha_oracle_rg:clifconfig[69] addr=192.168.4.13
+ha_oracle_rg:clifconfig[90] shift
+ha_oracle_rg:clifconfig[64] [[ -n netmask ]]
+ha_oracle_rg:clifconfig[71] params=' address=192.168.4.13 netmask=255.255.255.0'
+ha_oracle_rg:clifconfig[71] shift
+ha_oracle_rg:clifconfig[90] shift
+ha_oracle_rg:clifconfig[64] [[ -n '' ]]
+ha_oracle_rg:clifconfig[93] [[ -n 1 ]]
+ha_oracle_rg:clifconfig[93] [[ -n ha_oracle_rg ]]
+ha_oracle_rg:clifconfig[94] clwparname ha_oracle_rg
+ha_oracle_rg:clwparname[35] [[ high == high ]]
+ha_oracle_rg:clwparname[35] version=1.3
+ha_oracle_rg:clwparname[37] . /usr/es/sbin/cluster/wpar/wpar_utils
+ha_oracle_rg:clwparname[+20] ERRNO=0
+ha_oracle_rg:clwparname[+22] [[ high == high ]]
+ha_oracle_rg:clwparname[+22] set -x
+ha_oracle_rg:clwparname[+23] [[ high == high ]]
+ha_oracle_rg:clwparname[+23] version=1.10
+ha_oracle_rg:clwparname[+25] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+ha_oracle_rg:clwparname[+20] [[ high == high ]]
+ha_oracle_rg:clwparname[+20] set -x
+ha_oracle_rg:clwparname[+21] [[ high == high ]]
+ha_oracle_rg:clwparname[+21] version=1.4
+ha_oracle_rg:clwparname[+23] PATH=/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/events/emulate:/usr/es/sbin/cluster/events/emulate/driver:/usr/es/sbin/cluster/events/emulate/utils:/usr/es/sbin/cluster/tguides/bin:/usr/es/sbin/cluster/tguides/classes:/usr/es/sbin/cluster/tguides/images:/usr/es/sbin/cluster/tguides/scripts:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+ha_oracle_rg:clwparname[+24] export PATH
+ha_oracle_rg:clwparname[+26] typeset usageErr invalArgErr internalErr
+ha_oracle_rg:clwparname[+28] usageErr=10
+ha_oracle_rg:clwparname[+29] invalArgErr=11
+ha_oracle_rg:clwparname[+30] internalErr=12
+ha_oracle_rg:clwparname[+39] rgName=ha_oracle_rg
+ha_oracle_rg:clwparname[+42] uname
+ha_oracle_rg:clwparname[+42] OSNAME=AIX
+ha_oracle_rg:clwparname[+51] [[ AIX == *AIX* ]]
+ha_oracle_rg:clwparname[+54] lslpp -l bos.wpars
+ha_oracle_rg:clwparname[+54] 1> /dev/null 2>& 1
+ha_oracle_rg:clwparname[+56] loadWparName ha_oracle_rg
+ha_oracle_rg:clwparname[loadWparName+5] usage='Usage: loadWparName <resource group name>'
+ha_oracle_rg:clwparname[loadWparName+5] typeset -r usage
+ha_oracle_rg:clwparname[loadWparName+6] typeset rgName wparName wparDir rc
+ha_oracle_rg:clwparname[loadWparName+8] [[ 1 < 1 ]]
+ha_oracle_rg:clwparname[loadWparName+13] rgName=ha_oracle_rg
+ha_oracle_rg:clwparname[loadWparName+14] wparDir=/var/hacmp/adm/wpar
+ha_oracle_rg:clwparname[loadWparName+15] [[ ! -d /var/hacmp/adm/wpar ]]
+ha_oracle_rg:clwparname[loadWparName+22] [[ -f /var/hacmp/adm/wpar/ha_oracle_rg ]]
+ha_oracle_rg:clwparname[loadWparName+23] cat /var/hacmp/adm/wpar/ha_oracle_rg
+ha_oracle_rg:clwparname[loadWparName+23] wparName=''
+ha_oracle_rg:clwparname[loadWparName+24] [[ -n '' ]]
+ha_oracle_rg:clwparname[loadWparName+36] return 0
+ha_oracle_rg:clwparname[+56] wparName=''
+ha_oracle_rg:clwparname[+57] rc=0
+ha_oracle_rg:clwparname[+58] (( 0 != 0 ))
+ha_oracle_rg:clwparname[+64] printf %s
+ha_oracle_rg:clwparname[+65] exit 0
+ha_oracle_rg:clifconfig[94] WPARNAME=''
+ha_oracle_rg:clifconfig[95] [[ -n '' ]]
+ha_oracle_rg:clifconfig[113] ifconfig en1 alias 192.168.4.13 netmask 255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[+1264] cl_hats_adapter en1 -e 192.168.4.13 alias
+ha_oracle_rg:cl_hats_adapter[+50] [[ high = high ]]
+ha_oracle_rg:cl_hats_adapter[+50] version=1.40
+ha_oracle_rg:cl_hats_adapter[+51] +ha_oracle_rg:cl_hats_adapter[+51] cl_get_path
HA_DIR=es
+ha_oracle_rg:cl_hats_adapter[+52] +ha_oracle_rg:cl_hats_adapter[+52] cl_get_path -S
OP_SEP=~
+ha_oracle_rg:cl_hats_adapter[+57] [[ -f /usr/es/sbin/cluster/clstrmgr ]]
+ha_oracle_rg:cl_hats_adapter[+62] clcheck_server grpsvcs
+ha_oracle_rg:clcheck_server[95] [[ high == high ]]
+ha_oracle_rg:clcheck_server[95] version=1.10.1.5
+ha_oracle_rg:clcheck_server[96] cl_get_path
+ha_oracle_rg:clcheck_server[96] HA_DIR=es
+ha_oracle_rg:clcheck_server[98] SERVER=grpsvcs
+ha_oracle_rg:clcheck_server[99] STATUS=0
+ha_oracle_rg:clcheck_server[100] FATAL_ERROR=255
+ha_oracle_rg:clcheck_server[101] retries=0
+ha_oracle_rg:clcheck_server[101] typeset -i retries
+ha_oracle_rg:clcheck_server[103] [[ -n grpsvcs ]]
+ha_oracle_rg:clcheck_server[110] [[ 0 < 3 ]]
+ha_oracle_rg:clcheck_server[114] lssrc -s grpsvcs
+ha_oracle_rg:clcheck_server[114] 1> /dev/null 2> /dev/null
+ha_oracle_rg:clcheck_server[128] egrep 'stop|active'
+ha_oracle_rg:clcheck_server[128] lssrc -s grpsvcs
+ha_oracle_rg:clcheck_server[128] LC_ALL=C
+ha_oracle_rg:clcheck_server[128] check_if_down=' grpsvcs grpsvcs 7209164 active'
+ha_oracle_rg:clcheck_server[133] [[ -z ' grpsvcs grpsvcs 7209164 active' ]]
+ha_oracle_rg:clcheck_server[154] check_server_extended grpsvcs
+ha_oracle_rg:clcheck_server[48] PS4_FUNC=check_server_extended
+ha_oracle_rg:clcheck_server[48] typeset PS4_FUNC
+ha_oracle_rg:clcheck_server[51] SERVER=grpsvcs
+ha_oracle_rg:clcheck_server[51] typeset SERVER
+ha_oracle_rg:clcheck_server[52] STATUS=1
+ha_oracle_rg:clcheck_server[52] typeset STATUS
+ha_oracle_rg:clcheck_server[60] grep -q CLSTRMGR_
+ha_oracle_rg:clcheck_server[60] lssrc -ls grpsvcs
+ha_oracle_rg:clcheck_server[60] LC_ALL=C
+ha_oracle_rg:clcheck_server[67] echo 1
+ha_oracle_rg:clcheck_server[68] return
+ha_oracle_rg:clcheck_server[154] STATUS=1
+ha_oracle_rg:clcheck_server[155] return 1
+ha_oracle_rg:cl_hats_adapter[+72] [[ 1 == 0 ]]
+ha_oracle_rg:cl_hats_adapter[+79] IF=en1
+ha_oracle_rg:cl_hats_adapter[+91] FLAG=-e
+ha_oracle_rg:cl_hats_adapter[+94] ADDRESS=192.168.4.13
+ha_oracle_rg:cl_hats_adapter[+96] ADDRESS1=alias
+ha_oracle_rg:cl_hats_adapter[+99] USEHWAT=FALSE
+ha_oracle_rg:cl_hats_adapter[+102] INDARE=FALSE
+ha_oracle_rg:cl_hats_adapter[+104] [[ alias == hwat ]]
+ha_oracle_rg:cl_hats_adapter[+104] [[ alias == dare ]]
+ha_oracle_rg:cl_hats_adapter[+118] cl_migcheck HAES
+ha_oracle_rg:cl_hats_adapter[+119] [ 0 -eq 1 ]
+ha_oracle_rg:cl_hats_adapter[+123] cldomain
+ha_oracle_rg:cl_hats_adapter[+123] export HA_DOMAIN_NAME=ha_oracle
+ha_oracle_rg:cl_hats_adapter[+125] export HB_SERVER_SOCKET=/var/ha/soc/topsvcs/server_socket
+ha_oracle_rg:cl_hats_adapter[+127] set -u
+ha_oracle_rg:cl_hats_adapter[+132] [ alias = alias ]
+ha_oracle_rg:cl_hats_adapter[+136] hats_adapter_notify en1 -e 192.168.4.13 alias
2013-03-25T19:21:45.736452 hats_adapter_notify
2013-03-25T19:21:45.755090 hats_adapter_notify
+ha_oracle_rg:cl_hats_adapter[+137] : rc_hats_adapter_notify = 0
+ha_oracle_rg:cl_hats_adapter[+139] exit 0
+ha_oracle_rg:cl_swap_IP_address[+1268] check_alias_status en1 192.168.4.13 acquire
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+5] CH_INTERFACE=en1
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+6] CH_ADDRESS=192.168.4.13
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+7] CH_ACQ_OR_RLSE=acquire
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+15] IF_IB=en1
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+17] +ha_oracle_rg:cl_swap_IP_address[check_alias_status+17] echo en1
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+17] awk {print index($0, "ib")}
IS_IB=0
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+17] [[ __AIX__ = __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+21] [ 0 -ne 1 ]
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+23] +ha_oracle_rg:cl_swap_IP_address[check_alias_status+23] clifconfig en1
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+23] awk {print $2}
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+23] fgrep -w 192.168.4.13
+ha_oracle_rg:clifconfig[51] [[ high == high ]]
+ha_oracle_rg:clifconfig[51] version=1.3
+ha_oracle_rg:clifconfig[53] set -A args en1
+ha_oracle_rg:clifconfig[60] [[ -n en1 ]]
+ha_oracle_rg:clifconfig[60] [[ e != - ]]
+ha_oracle_rg:clifconfig[61] interface=en1
+ha_oracle_rg:clifconfig[62] shift
+ha_oracle_rg:clifconfig[64] [[ -n '' ]]
+ha_oracle_rg:clifconfig[93] [[ -n '' ]]
+ha_oracle_rg:clifconfig[113] ifconfig en1
ADDR=192.168.4.13
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+31] [ acquire = acquire ]
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+31] [[ 192.168.4.13 != 192.168.4.13 ]]
+ha_oracle_rg:cl_swap_IP_address[check_alias_status+46] return 0
+ha_oracle_rg:cl_swap_IP_address[+1268] [[ 0 -ne 0 ]]
+ha_oracle_rg:cl_swap_IP_address[+1327] flush_arp
+ha_oracle_rg:cl_swap_IP_address[flush_arp+4] arp -an
+ha_oracle_rg:cl_swap_IP_address[flush_arp+4] grep \?
+ha_oracle_rg:cl_swap_IP_address[flush_arp+4] tr -d ()
+ha_oracle_rg:cl_swap_IP_address[flush_arp+4] read host addr other
+ha_oracle_rg:cl_swap_IP_address[flush_arp+5] arp -d 192.168.3.11
192.168.3.11 (192.168.3.11) deleted
+ha_oracle_rg:cl_swap_IP_address[flush_arp+4] read host addr other
+ha_oracle_rg:cl_swap_IP_address[flush_arp+5] arp -d 192.168.2.10
192.168.2.10 (192.168.2.10) deleted
+ha_oracle_rg:cl_swap_IP_address[flush_arp+4] read host addr other
+ha_oracle_rg:cl_swap_IP_address[flush_arp+5] arp -d 192.168.2.11
192.168.2.11 (192.168.2.11) deleted
+ha_oracle_rg:cl_swap_IP_address[flush_arp+4] read host addr other
+ha_oracle_rg:cl_swap_IP_address[flush_arp+5] arp -d 192.168.4.11
192.168.4.11 (192.168.4.11) deleted
+ha_oracle_rg:cl_swap_IP_address[flush_arp+4] read host addr other
+ha_oracle_rg:cl_swap_IP_address[flush_arp+7] return 0
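The `flush_arp` steps above delete every cached ARP entry so that peers relearn the MAC behind the newly aliased service address. A minimal standalone sketch of the same loop follows; the sample `arp -an` output is hypothetical (mirroring the addresses in this log), and on a live system the `echo` would be `arp -d "$addr"`:

```shell
# Hypothetical `arp -an` style output, matching the entries in the log above.
arp_output='? (192.168.3.11) at 0:11:25:bf:95:50 [ethernet]
? (192.168.2.10) at 0:1a:64:93:a3:80 [ethernet]'

# Same pipeline as flush_arp: keep cached entries, strip parentheses,
# read the second field as the IP to delete.
echo "$arp_output" | grep '?' | tr -d '()' |
while read host addr other; do
    # live system would run: arp -d "$addr"
    echo "would delete $addr"
done
```

This matches the trace exactly: `read host addr other` leaves the IP in `addr`, which `arp -d` then removes.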
+ha_oracle_rg:cl_swap_IP_address[+1489] netstat -in
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
en1 1500 link#2 0.1a.64.93.a3.83 11951 0 8503 4 0
en1 1500 192.168.2 192.168.2.12 11951 0 8503 4 0
en1 1500 192.168.4 192.168.4.13 11951 0 8503 4 0
en2 1500 link#3 0.11.25.bf.95.52 6785 0 5009 3 0
en2 1500 192.168.3 192.168.3.12 6785 0 5009 3 0
en2 1500 192.168.4 192.168.4.12 6785 0 5009 3 0
lo0 16896 link#1 19060 0 19060 0 0
lo0 16896 127 127.0.0.1 19060 0 19060 0 0
lo0 16896 ::1%1 19060 0 19060 0 0
+ha_oracle_rg:cl_swap_IP_address[+1490] netstat -rnC
Routing tables
Destination Gateway Flags Wt Policy If Cost Config_Cost
Route tree for Protocol Family 2 (Internet):
127/8 127.0.0.1 U 1 - lo0 0 0
192.168.2.0 192.168.2.12 UHSb 1 - en1 0 0 =>
192.168.2/24 192.168.2.12 U 1 - en1 0 0
192.168.2.12 127.0.0.1 UGHS 1 - lo0 0 0
192.168.2.255 192.168.2.12 UHSb 1 - en1 0 0
192.168.3.0 192.168.3.12 UHSb 1 - en2 0 0 =>
192.168.3/24 192.168.3.12 U 1 - en2 0 0
192.168.3.12 127.0.0.1 UGHS 1 - lo0 0 0
192.168.3.255 192.168.3.12 UHSb 1 - en2 0 0
192.168.4.0 192.168.4.12 UHSb 1 WRR en2 0 0 =>
192.168.4.0 192.168.4.13 UHSb 1 -"- en1 0 0 =>
192.168.4/24 192.168.4.12 U 1 WRR en2 0 0 =>
192.168.4/24 192.168.4.13 U 1 -"- en1 0 0
192.168.4.12 127.0.0.1 UGHS 1 - lo0 0 0
192.168.4.13 127.0.0.1 UGHS 1 - lo0 0 0
192.168.4.255 192.168.4.12 UHSb 1 WRR en2 0 0 =>
192.168.4.255 192.168.4.13 UHSb 1 -"- en1 0 0
Route tree for Protocol Family 24 (Internet v6):
::1%1 ::1%1 UH 1 - lo0 0 0
+ha_oracle_rg:cl_swap_IP_address[+1956] [[ __AIX__ == __AIX__ ]]
+ha_oracle_rg:cl_swap_IP_address[+1956] restore_ipignoreredirects
Setting ipignoreredirects to 0
+ha_oracle_rg:cl_swap_IP_address[+1958] cl_echo 32 Completed execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating acquire en1 192.168.4.13 192.168.2.12 255.255.255.0. Exit status = 0 /usr/es/sbin/cluster/events/utils/cl_swap_IP_address rotating acquire en1 192.168.4.13 192.168.2.12 255.255.255.0 0
+ha_oracle_rg:cl_echo[+35] version=1.16
+ha_oracle_rg:cl_echo[+84] HACMP_OUT_FILE=/var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+89] set +u
+ha_oracle_rg:cl_echo[+89] [[ -n ]]
+ha_oracle_rg:cl_echo[+92] set -u
+ha_oracle_rg:cl_echo[+95] print -n -u2 Mar 25 2013 19:21:45
Mar 25 2013 19:21:45 +ha_oracle_rg:cl_echo[+96] MSG_ID=32
+ha_oracle_rg:cl_echo[+97] shift
+ha_oracle_rg:cl_echo[+98] dspmsg scripts.cat 32 Completed execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating acquire en1 192.168.4.13 192.168.2.12 255.255.255.0. Exit status = 0 /usr/es/sbin/cluster/events/utils/cl_swap_IP_address rotating acquire en1 192.168.4.13 192.168.2.12 255.255.255.0 0
+ha_oracle_rg:cl_echo[+98] 1>& 2
Completed execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating acquire en1 192.168.4.13 192.168.2.12 255.255.255.0. Exit status = 0+ha_oracle_rg:cl_echo[+101] clsynclog /var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+101] 1> /dev/null 2>& 1
+ha_oracle_rg:cl_echo[+105] return 0
+ha_oracle_rg:cl_swap_IP_address[+1960] date
Mon Mar 25 19:21:46 BEIST 2013
+ha_oracle_rg:cl_swap_IP_address[+1962] exit 0
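`cl_swap_IP_address` exits 0 here: the service alias 192.168.4.13 is now configured on en1 alongside the base address 192.168.2.12, as the second `netstat -in` above shows. A quick hedged sketch for checking which interfaces carry a given network from captured output (awk logic only; the column layout is assumed to match the `netstat -in` format in this log):

```shell
# Hypothetical excerpt in the `netstat -in` column layout from the log.
netstat_in='Name Mtu Network Address Ipkts
en1 1500 192.168.2 192.168.2.12 11951
en1 1500 192.168.4 192.168.4.13 11951
en2 1500 192.168.4 192.168.4.12 6785'

# Print interface and address for every entry on the 192.168.4 network.
echo "$netstat_in" | awk '$3 == "192.168.4" {print $1, $4}'
# prints:
#   en1 192.168.4.13
#   en2 192.168.4.12
```

Note that 192.168.4 now appears on both en1 (the acquired service alias) and en2 (the existing 192.168.4.12 address), which is why the `netstat -rnC` output above shows duplicated `192.168.4/24` routes on both interfaces.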
+ha_oracle_rg:acquire_service_addr[+740] RC=0
+ha_oracle_rg:acquire_service_addr[+741] [ 0 -ne 0 ]
+ha_oracle_rg:acquire_service_addr[+741] [[ false = false ]]
+ha_oracle_rg:acquire_service_addr[+754] STATUS=0
+ha_oracle_rg:acquire_service_addr[+755] TELINIT=true
+ha_oracle_rg:acquire_service_addr[+755] [[ -n ]]
+ha_oracle_rg:acquire_service_addr[+780] [ REAL = EMUL ]
+ha_oracle_rg:acquire_service_addr[+785] clcallev acquire_aconn_service en1 net_ether_01
Mar 25 19:21:46 EVENT START: acquire_aconn_service en1 net_ether_01
+ha_oracle_rg:acquire_aconn_service[+53] [[ high = high ]]
+ha_oracle_rg:acquire_aconn_service[+53] version=1.13
+ha_oracle_rg:acquire_aconn_service[+54] +ha_oracle_rg:acquire_aconn_service[+54] cl_get_path
HA_DIR=es
+ha_oracle_rg:acquire_aconn_service[+55] +ha_oracle_rg:acquire_aconn_service[+55] cl_get_path -S
OP_SEP=~
+ha_oracle_rg:acquire_aconn_service[+57] STATUS=0
+ha_oracle_rg:acquire_aconn_service[+59] [ 2 -ne 2 ]
+ha_oracle_rg:acquire_aconn_service[+65] set -u
+ha_oracle_rg:acquire_aconn_service[+67] SERVICE_INTERFACE=en1
+ha_oracle_rg:acquire_aconn_service[+68] NETWORK=net_ether_01
+ha_oracle_rg:acquire_aconn_service[+70] +ha_oracle_rg:acquire_aconn_service[+70] cllsif -i ha_db02 -SJ ~
+ha_oracle_rg:acquire_aconn_service[+70] awk -F~ {if ( $2 == "standby" && ( $5 == "public" || $5 == "private" )) print $0}
STANDBY_ADAPTERS_INFO=
+ha_oracle_rg:acquire_aconn_service[+73] STANDBY_INTERFACES=
+ha_oracle_rg:acquire_aconn_service[+102] exit 0
Mar 25 19:21:46 EVENT COMPLETED: acquire_aconn_service en1 net_ether_01 0
+ha_oracle_rg:acquire_service_addr[+788] RC=0
+ha_oracle_rg:acquire_service_addr[+790] [ 0 -ne 0 ]
+ha_oracle_rg:acquire_service_addr[+810] [ REAL = EMUL ]
+ha_oracle_rg:acquire_service_addr[+815] cl_RMupdate resource_up All_nonerror_service_addrs acquire_service_addr
2013-03-25T19:21:46.348178
2013-03-25T19:21:46.361960
Reference string: Mon.Mar.25.19:21:46.BEIST.2013.acquire_service_addr.All_nonerror_service_addrs.ha_oracle_rg.ref
+ha_oracle_rg:acquire_service_addr[+821] [[ UNDEFINED != UNDEFINED ]]
+ha_oracle_rg:acquire_service_addr[+824] export NSORDER=
+ha_oracle_rg:acquire_service_addr[+827] [[ false = false ]]
+ha_oracle_rg:acquire_service_addr[+828] [ true = true ]
+ha_oracle_rg:acquire_service_addr[+830] cl_telinit
+ha_oracle_rg:cl_telinit[169] basename /usr/es/sbin/cluster/events/utils/cl_telinit
+ha_oracle_rg:cl_telinit[169] PROGNAME=cl_telinit
+ha_oracle_rg:cl_telinit[170] cl_get_path all
+ha_oracle_rg:cl_telinit[170] PATH=/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/events/emulate:/usr/es/sbin/cluster/events/emulate/driver:/usr/es/sbin/cluster/events/emulate/utils:/usr/es/sbin/cluster/tguides/bin:/usr/es/sbin/cluster/tguides/classes:/usr/es/sbin/cluster/tguides/images:/usr/es/sbin/cluster/tguides/scripts:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin
+ha_oracle_rg:cl_telinit[171] cl_get_path
+ha_oracle_rg:cl_telinit[171] HA_DIR=es
+ha_oracle_rg:cl_telinit[174] TELINIT_FILE=/usr/es/sbin/cluster/.telinit
+ha_oracle_rg:cl_telinit[175] USE_TELINIT_FILE=/usr/es/sbin/cluster/.use_telinit
+ha_oracle_rg:cl_telinit[177] [[ -f /usr/es/sbin/cluster/.use_telinit ]]
+ha_oracle_rg:cl_telinit[181] USE_TELINIT=0
+ha_oracle_rg:cl_telinit[190] [[ '' == -boot ]]
+ha_oracle_rg:cl_telinit[228] cl_lsitab clinit
+ha_oracle_rg:cl_telinit[228] 1> /dev/null 2>& 1
+ha_oracle_rg:cl_telinit[231] : telinit a disabled
+ha_oracle_rg:cl_telinit[233] return 0
+ha_oracle_rg:acquire_service_addr[+835] exit 0
Mar 25 19:21:46 EVENT COMPLETED: acquire_service_addr ser 0
+ha_oracle_rg:node_up_local[270] RC=0
+ha_oracle_rg:node_up_local[271] : exit status of acquire_service_addr is: 0
+ha_oracle_rg:node_up_local[272] (( 0 != 0 ))
+ha_oracle_rg:node_up_local[274] [[ __AIX__ == __AIX__ ]]
+ha_oracle_rg:node_up_local[277] : Register the new service labels with our resource $'group\'s' NFSv4
+ha_oracle_rg:node_up_local[278] : node.
+ha_oracle_rg:node_up_local[280] : Note that cl_nfs4smctl will ignore this request if the resource group
+ha_oracle_rg:node_up_local[281] : is not using NFSv4 so we can call it blindly without checking whether
+ha_oracle_rg:node_up_local[282] : NFSv4 is in use. We could try to optimize things by checking here
+ha_oracle_rg:node_up_local[283] : whether EXPORT_FILESYSTEM_V4 is set, however if this script is called
+ha_oracle_rg:node_up_local[284] : in the context of a dare operation, then EXPORT_FILESYSTEM_V4 will
+ha_oracle_rg:node_up_local[285] : only be set if new NFSv4 exports are being added which will cause us
+ha_oracle_rg:node_up_local[286] : to skip this code in the case that NFSv4 was already configured and
+ha_oracle_rg:node_up_local[287] : a dare operation adds a new service label but $'doesn\'t' add any new
+ha_oracle_rg:node_up_local[288] : NFSv4 exports.
+ha_oracle_rg:node_up_local[290] : Also note that the only real purpose of this code here is to cover
+ha_oracle_rg:node_up_local[291] : that particular case, since cl_export_fs will be called in all other
+ha_oracle_rg:node_up_local[292] : cases and it will add the service labels as well.
+ha_oracle_rg:node_up_local[297] cl_nfs4smctl -A -N ha_oracle_rg -n ser
cl_nfs4smctl:cl_nfs4smctl[537]: [[ high == high ]]
cl_nfs4smctl:cl_nfs4smctl[537]: version=1.6
cl_nfs4smctl:cl_nfs4smctl[539]: PATH=/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/events/emulate:/usr/es/sbin/cluster/events/emulate/driver:/usr/es/sbin/cluster/events/emulate/utils:/usr/es/sbin/cluster/tguides/bin:/usr/es/sbin/cluster/tguides/classes:/usr/es/sbin/cluster/tguides/images:/usr/es/sbin/cluster/tguides/scripts:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin
cl_nfs4smctl:cl_nfs4smctl[542]: bootinfo -K
cl_nfs4smctl:cl_nfs4smctl[542]: 2> /dev/null
cl_nfs4smctl:cl_nfs4smctl[542]: KERNEL_BITS=64
cl_nfs4smctl:cl_nfs4smctl[543]: (( KERNEL_BITS != 64 ))
cl_nfs4smctl:cl_nfs4smctl[547]: typeset -i STATUS
cl_nfs4smctl:cl_nfs4smctl[548]: command=-A
cl_nfs4smctl:cl_nfs4smctl[548]: typeset command
cl_nfs4smctl:cl_nfs4smctl[550]: shift
cl_nfs4smctl:cl_nfs4smctl[557]: AddAddress -N ha_oracle_rg -n ser
cl_nfs4smctl:AddAddress[3]: (( 4 != 4 ))
cl_nfs4smctl:AddAddress[5]: typeset arg nodename address
cl_nfs4smctl:AddAddress[7]: (( 4 > 0 ))
cl_nfs4smctl:AddAddress[8]: arg=-N
cl_nfs4smctl:AddAddress[8]: typeset arg
cl_nfs4smctl:AddAddress[9]: shift
cl_nfs4smctl:AddAddress[12]: nodename=ha_oracle_rg
cl_nfs4smctl:AddAddress[12]: shift
cl_nfs4smctl:AddAddress[7]: (( 2 > 0 ))
cl_nfs4smctl:AddAddress[8]: arg=-n
cl_nfs4smctl:AddAddress[8]: typeset arg
cl_nfs4smctl:AddAddress[9]: shift
cl_nfs4smctl:AddAddress[13]: address=ser
cl_nfs4smctl:AddAddress[13]: shift
cl_nfs4smctl:AddAddress[7]: (( 0 > 0 ))
cl_nfs4smctl:AddAddress[18]: [[ -z ha_oracle_rg ]]
cl_nfs4smctl:AddAddress[18]: [[ -z ser ]]
cl_nfs4smctl:AddAddress[22]: host ser
cl_nfs4smctl:AddAddress[22]: LANG=C
cl_nfs4smctl:AddAddress[22]: head -1
cl_nfs4smctl:AddAddress[22]: cut '-d ' -f3
cl_nfs4smctl:AddAddress[22]: tr -d ,
cl_nfs4smctl:AddAddress[22]: ipaddr=192.168.4.13
cl_nfs4smctl:AddAddress[22]: typeset ipaddr
cl_nfs4smctl:AddAddress[24]: [[ -z 192.168.4.13 ]]
cl_nfs4smctl:AddAddress[28]: setenvnode ha_oracle_rg
cl_nfs4smctl:QueryNode[3]: typeset cmd nodename delimiter line result
cl_nfs4smctl:QueryNode[4]: cmd=-Q
cl_nfs4smctl:QueryNode[6]: (( 4 > 0 ))
cl_nfs4smctl:QueryNode[7]: arg=-N
cl_nfs4smctl:QueryNode[7]: typeset arg
cl_nfs4smctl:QueryNode[8]: shift
cl_nfs4smctl:QueryNode[11]: nodename=ha_oracle_rg
cl_nfs4smctl:QueryNode[11]: shift
cl_nfs4smctl:QueryNode[11]: cmd='-Q -N ha_oracle_rg'
cl_nfs4smctl:QueryNode[6]: (( 2 > 0 ))
cl_nfs4smctl:QueryNode[7]: arg=-d
cl_nfs4smctl:QueryNode[7]: typeset arg
cl_nfs4smctl:QueryNode[8]: shift
cl_nfs4smctl:QueryNode[12]: grep value
cl_nfs4smctl:QueryNode[12]: awk '{print $3}'
cl_nfs4smctl:QueryNode[12]: /usr/bin/odmget -q group='ha_oracle_rg AND name=SERVICE_LABEL' HACMPresource
cl_nfs4smctl:QueryNode[12]: sed -e 's/"//g'
cl_nfs4smctl:QueryNode[12]: label=ser
cl_nfs4smctl:QueryNode[13]: grep identifier
cl_nfs4smctl:QueryNode[13]: awk '{print $3}'
cl_nfs4smctl:QueryNode[13]: /usr/bin/odmget -q ip_label=ser HACMPadapter
cl_nfs4smctl:QueryNode[13]: sed -e 's/"//g'
cl_nfs4smctl:QueryNode[13]: address=192.168.4.13
cl_nfs4smctl:QueryNode[14]: isipv6addr 192.168.4.13
cl_nfs4smctl:QueryNode[14]: address=''
cl_nfs4smctl:QueryNode[15]: (( 0 != 0 ))
cl_nfs4smctl:QueryNode[20]: v6_add=1
cl_nfs4smctl:QueryNode[21]: delimiter='|'
cl_nfs4smctl:QueryNode[21]: shift
cl_nfs4smctl:QueryNode[21]: cmd='-Q -N ha_oracle_rg -S \~'
cl_nfs4smctl:QueryNode[6]: (( 0 > 0 ))
cl_nfs4smctl:QueryNode[27]: [[ -n '|' ]]
cl_nfs4smctl:QueryNode[28]: result=1
cl_nfs4smctl:QueryNode[29]: nfs4smctl -Q -N ha_oracle_rg -S '\~'
cl_nfs4smctl:QueryNode[29]: 2> /dev/null
cl_nfs4smctl:QueryNode[29]: read line
cl_nfs4smctl:QueryNode[45]: return 1
cl_nfs4smctl:AddAddress[31]: [[ -z '' ]]
cl_nfs4smctl:AddAddress[32]: return 0
cl_nfs4smctl:cl_nfs4smctl[558]: STATUS=0
cl_nfs4smctl:cl_nfs4smctl[582]: (( STATUS == 2 ))
cl_nfs4smctl:cl_nfs4smctl[586]: exit 0
+ha_oracle_rg:node_up_local[298] RC=0
+ha_oracle_rg:node_up_local[299] : exit status of cl_nfs4smctl is: 0
+ha_oracle_rg:node_up_local[300] (( 0 != 0 && STATUS == 0 ))
+ha_oracle_rg:node_up_local[307] : Acquire service address on standby adapter if we are not the
+ha_oracle_rg:node_up_local[308] : highest priority node for that resource. Determined by environment
+ha_oracle_rg:node_up_local[309] : variables.
+ha_oracle_rg:node_up_local[311] [[ -n '' ]]
+ha_oracle_rg:node_up_local[321] (( 0 == 1 ))
+ha_oracle_rg:node_up_local[332] : Mount filesystems, varyon volume groups, make disks available,
+ha_oracle_rg:node_up_local[333] : and export filesystems if it is not done yet.
+ha_oracle_rg:node_up_local[335] [[ false != true ]]
+ha_oracle_rg:node_up_local[337] get_fileysystems
+ha_oracle_rg:node_up_local[61] PS4_FUNC=get_fileysystems
+ha_oracle_rg:node_up_local[61] typeset PS4_FUNC
+ha_oracle_rg:node_up_local[62] [[ high == high ]]
+ha_oracle_rg:node_up_local[62] set -x
+ha_oracle_rg:node_up_local[64] typeset -i rc
+ha_oracle_rg:node_up_local[65] set +u
+ha_oracle_rg:node_up_local[66] OEM_FILESYSTEM=''
+ha_oracle_rg:node_up_local[67] OEM_VOLUME_GROUP=''
+ha_oracle_rg:node_up_local[68] set -u
+ha_oracle_rg:node_up_local[70] [[ __AIX__ == __LINUX__ ]]
+ha_oracle_rg:node_up_local[72] clcallev get_disk_vg_fs ALL datavg '' '' ''
Mar 25 19:21:48 EVENT START: get_disk_vg_fs ALL datavg
+ha_oracle_rg:get_disk_vg_fs[66] [[ high == high ]]
+ha_oracle_rg:get_disk_vg_fs[66] version=1.88
+ha_oracle_rg:get_disk_vg_fs[68] STATUS=0
+ha_oracle_rg:get_disk_vg_fs[70] FILE_SYSTEMS=ALL
+ha_oracle_rg:get_disk_vg_fs[71] VOLUME_GROUPS=datavg
+ha_oracle_rg:get_disk_vg_fs[72] PVID_LIST=''
+ha_oracle_rg:get_disk_vg_fs[73] OEM_FILE_SYSTEMS=''
+ha_oracle_rg:get_disk_vg_fs[74] OEM_VOLUME_GROUPS=''
+ha_oracle_rg:get_disk_vg_fs[75] HDISK_LIST=''
+ha_oracle_rg:get_disk_vg_fs[76] UNMOUNTED_FS=''
+ha_oracle_rg:get_disk_vg_fs[77] OEM_UNMOUNTED_FS=''
+ha_oracle_rg:get_disk_vg_fs[78] INACTIVE_VGS=''
+ha_oracle_rg:get_disk_vg_fs[79] sddsrv_off=FALSE
+ha_oracle_rg:get_disk_vg_fs[80] SKIPBRKRES=0
+ha_oracle_rg:get_disk_vg_fs[80] typeset -i SKIPBRKRES
+ha_oracle_rg:get_disk_vg_fs[81] SKIPVARYON=0
+ha_oracle_rg:get_disk_vg_fs[81] typeset -i SKIPVARYON
+ha_oracle_rg:get_disk_vg_fs[82] DEF_VARYON_ACTION=0
+ha_oracle_rg:get_disk_vg_fs[82] typeset -i DEF_VARYON_ACTION
+ha_oracle_rg:get_disk_vg_fs[83] odmget HACMPrepresource
+ha_oracle_rg:get_disk_vg_fs[83] 2> /dev/null
+ha_oracle_rg:get_disk_vg_fs[83] HACMPREPRESOURCE_ODM=''
+ha_oracle_rg:get_disk_vg_fs[85] [[ -n ALL ]]
+ha_oracle_rg:get_disk_vg_fs[88] : If no volume group names were passed in, retrieve them from
+ha_oracle_rg:get_disk_vg_fs[89] : the resource group defintion
+ha_oracle_rg:get_disk_vg_fs[91] [[ -z datavg ]]
+ha_oracle_rg:get_disk_vg_fs[98] : If filesystems are given and not already mounted,
+ha_oracle_rg:get_disk_vg_fs[99] : figure out the associated volume groups
+ha_oracle_rg:get_disk_vg_fs[101] [[ ALL != ALL ]]
+ha_oracle_rg:get_disk_vg_fs[129] : Process OEM Volume Groups in a similar manner
+ha_oracle_rg:get_disk_vg_fs[131] [[ -n '' ]]
+ha_oracle_rg:get_disk_vg_fs[162] : Process volume groups
+ha_oracle_rg:get_disk_vg_fs[164] [[ -n datavg ]]
+ha_oracle_rg:get_disk_vg_fs[166] lsvg -L -o
+ha_oracle_rg:get_disk_vg_fs[166] 1> /tmp/lsvg.out.6225978 2> /tmp/lsvg.err
+ha_oracle_rg:get_disk_vg_fs[170] : Check to see if the volume group is both $'vary\'d' on, and readable
+ha_oracle_rg:get_disk_vg_fs[171] : by LVM - e.g., not closed due to lack of quorum.
+ha_oracle_rg:get_disk_vg_fs[173] grep -qx datavg /tmp/lsvg.out.6225978
+ha_oracle_rg:get_disk_vg_fs[180] lsvg -L datavg
+ha_oracle_rg:get_disk_vg_fs[180] 2> /dev/null
+ha_oracle_rg:get_disk_vg_fs[180] grep -i -q passive-only
+ha_oracle_rg:get_disk_vg_fs[192] : Append to the previous HDISK_LIST list.
+ha_oracle_rg:get_disk_vg_fs[194] cl_fs2disk -pg datavg
+ha_oracle_rg:get_disk_vg_fs[194] HDISK_LIST=' hdisk2'
+ha_oracle_rg:get_disk_vg_fs[195] INACTIVE_VGS=' datavg'
+ha_oracle_rg:get_disk_vg_fs[199] [[ -s /tmp/lsvg.err ]]
+ha_oracle_rg:get_disk_vg_fs[208] : Remove any duplicates from the list of volume groups to vary on
+ha_oracle_rg:get_disk_vg_fs[210] echo datavg
+ha_oracle_rg:get_disk_vg_fs[210] tr ' ' '\n'
+ha_oracle_rg:get_disk_vg_fs[210] sort -u
+ha_oracle_rg:get_disk_vg_fs[210] INACTIVE_VGS=datavg
+ha_oracle_rg:get_disk_vg_fs[215] : Get OEM Volume groups that are not already active
+ha_oracle_rg:get_disk_vg_fs[217] ALL_OEM_VOLUME_GROUPS=' '
+ha_oracle_rg:get_disk_vg_fs[219] [[ -n ' ' ]]
+ha_oracle_rg:get_disk_vg_fs[264] : Remove any duplicates from the list of volume groups to vary on
+ha_oracle_rg:get_disk_vg_fs[266] echo
+ha_oracle_rg:get_disk_vg_fs[266] tr ' ' '\n'
+ha_oracle_rg:get_disk_vg_fs[266] sort -u
+ha_oracle_rg:get_disk_vg_fs[266] OEM_INACTIVE_VGS=''
+ha_oracle_rg:get_disk_vg_fs[271] : Call replicated resource predisk-available method associated
+ha_oracle_rg:get_disk_vg_fs[272] : with any replicated resource defined in the resource group
+ha_oracle_rg:get_disk_vg_fs[273] : we are currently processing.
+ha_oracle_rg:get_disk_vg_fs[275] [[ -n datavg ]]
+ha_oracle_rg:get_disk_vg_fs[275] odmget HACMPrresmethods
+ha_oracle_rg:get_disk_vg_fs[275] [[ -n '' ]]
+ha_oracle_rg:get_disk_vg_fs[275] [[ -n '' ]]
+ha_oracle_rg:get_disk_vg_fs[304] : If Physical Volume IDs are given, figure out associated DISKs.
+ha_oracle_rg:get_disk_vg_fs[306] [[ -n '' ]]
+ha_oracle_rg:get_disk_vg_fs[324] : Take out any duplicate items in disk, volume group, and file_systems lists,
+ha_oracle_rg:get_disk_vg_fs[325] : Then call the individual script to make disks available, varyon volume
+ha_oracle_rg:get_disk_vg_fs[326] : groups, and mount filesystems.
+ha_oracle_rg:get_disk_vg_fs[328] [[ -n ' hdisk2' ]]
+ha_oracle_rg:get_disk_vg_fs[331] : Remove any duplicates that may have crept in
+ha_oracle_rg:get_disk_vg_fs[333] echo hdisk2
+ha_oracle_rg:get_disk_vg_fs[333] tr ' ' '\n'
+ha_oracle_rg:get_disk_vg_fs[333] sort -u
+ha_oracle_rg:get_disk_vg_fs[333] HDISK_LIST=hdisk2
+ha_oracle_rg:get_disk_vg_fs[336] : If the sddsrv daemon is running - vpath dead path detection and
+ha_oracle_rg:get_disk_vg_fs[337] : recovery - turn it off, since interactions with the fibre channel
+ha_oracle_rg:get_disk_vg_fs[338] : device driver will, in the case where there actually is a dead path,
+ha_oracle_rg:get_disk_vg_fs[339] : slow down every vpath operation.
+ha_oracle_rg:get_disk_vg_fs[341] echo hdisk2
+ha_oracle_rg:get_disk_vg_fs[341] grep -q vpath
+ha_oracle_rg:get_disk_vg_fs[415] : Break any reserverations, and make the disks available
+ha_oracle_rg:get_disk_vg_fs[417] (( 0 == 0 ))
+ha_oracle_rg:get_disk_vg_fs[419] cl_disk_available hdisk2
+ha_oracle_rg:cl_disk_available[1579] [[ high == high ]]
+ha_oracle_rg:cl_disk_available[1579] version=1.2.5.68
+ha_oracle_rg:cl_disk_available[1580] [[ high != high ]]
+ha_oracle_rg:cl_disk_available[1582] HA_DIR=es
+ha_oracle_rg:cl_disk_available[1584] JUST_DISKS=false
+ha_oracle_rg:cl_disk_available[1585] NO_RETAIN_RES=''
+ha_oracle_rg:cl_disk_available[1587] getopts :sv option
+ha_oracle_rg:cl_disk_available[1612] shift 0
+ha_oracle_rg:cl_disk_available[1614] ghostlist=''
+ha_oracle_rg:cl_disk_available[1615] disklist=''
+ha_oracle_rg:cl_disk_available[1616] pidlist=''
+ha_oracle_rg:cl_disk_available[1619] : Global variables to define associative arrays for types FSCSI SCSIDISK
+ha_oracle_rg:cl_disk_available[1620] : ISCSI: This will be used to collect the disks by owning adapter, to handle
+ha_oracle_rg:cl_disk_available[1621] : in parallel. The index is the adapter name, the element is a list of
+ha_oracle_rg:cl_disk_available[1622] : disks on that adapter. E.g.,
+ha_oracle_rg:cl_disk_available[1623] :
+ha_oracle_rg:cl_disk_available[1624] : 'fscarray[fscsi0]=hdisk11 hdisk22 hdisk33'
+ha_oracle_rg:cl_disk_available[1626] typeset -A fscarray
+ha_oracle_rg:cl_disk_available[1627] typeset -A pscarray
+ha_oracle_rg:cl_disk_available[1628] typeset -A iscarray
+ha_oracle_rg:cl_disk_available[1629] typeset -A rscarray
+ha_oracle_rg:cl_disk_available[1630] typeset -A sasarray
+ha_oracle_rg:cl_disk_available[1633] : List of vpath devices to be processed after the underlying hdisks are
+ha_oracle_rg:cl_disk_available[1634] : cleaned up
+ha_oracle_rg:cl_disk_available[1636] typeset dpolist
+ha_oracle_rg:cl_disk_available[1639] : This routine can be invoked in two different fashions:
+ha_oracle_rg:cl_disk_available[1640] : - resource groups processed serially
+ha_oracle_rg:cl_disk_available[1641] : A list of disks are passed as arguments
+ha_oracle_rg:cl_disk_available[1642] : - resource groups are processed in parallel
+ha_oracle_rg:cl_disk_available[1643] : Resource groups, their disks, and owning
+ha_oracle_rg:cl_disk_available[1644] : volume groups are passed as environment
+ha_oracle_rg:cl_disk_available[1645] : variables
+ha_oracle_rg:cl_disk_available[1647] [[ false == false ]]
+ha_oracle_rg:cl_disk_available[1647] [[ -n '' ]]
+ha_oracle_rg:cl_disk_available[1745] : If not called from process_resources, arguments are needed on
+ha_oracle_rg:cl_disk_available[1746] : the command line
+ha_oracle_rg:cl_disk_available[1748] PROC_RES=false
+ha_oracle_rg:cl_disk_available[1750] (( 1 == 0 ))
+ha_oracle_rg:cl_disk_available[1755] [[ false == false ]]
+ha_oracle_rg:cl_disk_available[1757] : If not called during event processing, skip the resource manager
+ha_oracle_rg:cl_disk_available[1758] : update.
+ha_oracle_rg:cl_disk_available[1760] ALLDISKS=All_disks
+ha_oracle_rg:cl_disk_available[1761] cl_RMupdate resource_acquiring All_disks cl_disk_available
2013-03-25T19:21:48.381970
2013-03-25T19:21:48.395779
Reference string: Mon.Mar.25.19:21:48.BEIST.2013.cl_disk_available.All_disks.ha_oracle_rg.ref
+ha_oracle_rg:cl_disk_available[1765] : Requests from serially processed resource groups
+ha_oracle_rg:cl_disk_available[1769] make_disk_available hdisk2
+ha_oracle_rg:cl_disk_available[4] disk=hdisk2
+ha_oracle_rg:cl_disk_available[7] : Determine the owning adapter. This will be used to determine the
+ha_oracle_rg:cl_disk_available[8] : disk type.
+ha_oracle_rg:cl_disk_available[10] lsdev -Cc disk -l hdisk2 -F parent
+ha_oracle_rg:cl_disk_available[10] parent=iscsi0
+ha_oracle_rg:cl_disk_available[11] lsdev -Cc disk -l hdisk2 -F PdDvLn
+ha_oracle_rg:cl_disk_available[11] PdDvLn=disk/iscsi/osdisk
+ha_oracle_rg:cl_disk_available[14] : Fetch the PdDvLn entry from ODM if lsdev cant get it.
+ha_oracle_rg:cl_disk_available[16] [[ -z disk/iscsi/osdisk ]]
+ha_oracle_rg:cl_disk_available[21] : Do not check Remote Physical Volume Client disk.
+ha_oracle_rg:cl_disk_available[22] : If PdDvLn=disk/remote_disk/rpvclient there is no parent.
+ha_oracle_rg:cl_disk_available[24] [[ disk/iscsi/osdisk != disk/remote_disk/rpvclient ]]
+ha_oracle_rg:cl_disk_available[26] [[ -z iscsi0 ]]
+ha_oracle_rg:cl_disk_available[26] [[ -z disk/iscsi/osdisk ]]
+ha_oracle_rg:cl_disk_available[46] : Determine the disk type. The mechanism used to break the reserves is
+ha_oracle_rg:cl_disk_available[47] : dependent on the type.
+ha_oracle_rg:cl_disk_available[49] disktype=UNKNOWN
+ha_oracle_rg:cl_disk_available[52] : First, check to see if this disk should be treated as equivalent to
+ha_oracle_rg:cl_disk_available[53] : one of the known types. This check is done first, because it is more
+ha_oracle_rg:cl_disk_available[54] : specific than the 'known types' checking below, which is primarily
+ha_oracle_rg:cl_disk_available[55] : based on owning adapter type.
+ha_oracle_rg:cl_disk_available[57] OEMDISKTYPES=/etc/cluster/disktype.lst
+ha_oracle_rg:cl_disk_available[67] [[ -r /etc/cluster/disktype.lst ]]
+ha_oracle_rg:cl_disk_available[67] [[ -s /etc/cluster/disktype.lst ]]
+ha_oracle_rg:cl_disk_available[68] cut -f2
+ha_oracle_rg:cl_disk_available[68] grep -w ^disk/iscsi/osdisk /etc/cluster/disktype.lst
+ha_oracle_rg:cl_disk_available[68] oemdisktype=''
+ha_oracle_rg:cl_disk_available[69] [[ '' == @(SCSIDISK|SSA|FCPARRAY|ARRAY|FSCSI|FLUTE) ]]
+ha_oracle_rg:cl_disk_available[76] : Check to see if the disk has custom methods.
+ha_oracle_rg:cl_disk_available[78] parallel=false
+ha_oracle_rg:cl_disk_available[79] makedev=MKDEV
+ha_oracle_rg:cl_disk_available[80] checkres=SCSI_TUR
+ha_oracle_rg:cl_disk_available[81] breakres=''
+ha_oracle_rg:cl_disk_available[82] [[ UNKNOWN == UNKNOWN ]]
+ha_oracle_rg:cl_disk_available[84] : Read the custom methods from the HACMPdisktype ODM class. This
+ha_oracle_rg:cl_disk_available[85] : will instatiate the following variables:
+ha_oracle_rg:cl_disk_available[86] : ghostdisks - routine to find ghost disks
+ha_oracle_rg:cl_disk_available[87] : checkres - routine to check is a reserve is held
+ha_oracle_rg:cl_disk_available[88] : breakres - routine to break reserves
+ha_oracle_rg:cl_disk_available[89] : parallel - true = break reserves in parallel
+ha_oracle_rg:cl_disk_available[89] 1> can
+ha_oracle_rg:cl_disk_available[90] : makedev - routine to make device available
+ha_oracle_rg:cl_disk_available[92] sed -n 's/ = /=/p'
+ha_oracle_rg:cl_disk_available[92] odmget -q 'PdDvLn = disk/iscsi/osdisk' HACMPdisktype
+ha_oracle_rg:cl_disk_available[92] eval
+ha_oracle_rg:cl_disk_available[93] [[ -n '' ]]
+ha_oracle_rg:cl_disk_available[99] : Check for disk type Remote Physical Volume Client
+ha_oracle_rg:cl_disk_available[101] [[ UNKNOWN == UNKNOWN ]]
+ha_oracle_rg:cl_disk_available[103] [[ disk/iscsi/osdisk == disk/remote_disk/rpvclient ]]
+ha_oracle_rg:cl_disk_available[109] [[ UNKNOWN == UNKNOWN ]]
+ha_oracle_rg:cl_disk_available[111] : Normal disk type checking, in lieu of any overrides or added
+ha_oracle_rg:cl_disk_available[112] : customizations
+ha_oracle_rg:cl_disk_available[156] disktype=ISCSI
+ha_oracle_rg:cl_disk_available[199] : Check to see if the device is available. We can tell this by
+ha_oracle_rg:cl_disk_available[200] : calling lsdev with the logical name of the device -l and
+ha_oracle_rg:cl_disk_available[201] : asking to see the devices that have a status of Available '-S A'
+ha_oracle_rg:cl_disk_available[202] : we limit the output to the status field '-F status.'
+ha_oracle_rg:cl_disk_available[204] lsdev -Cc disk -l hdisk2 -S A -F status
+ha_oracle_rg:cl_disk_available[204] [[ -n Available ]]
+ha_oracle_rg:cl_disk_available[207] : Break the reserve on an available disk. Where supported, do so in
+ha_oracle_rg:cl_disk_available[208] : a background task that will run asynchronously to this one.
+ha_oracle_rg:cl_disk_available[250] : Collect the ISCSI disks by owning adapter, for
+ha_oracle_rg:cl_disk_available[251] : later processing in parallel.
+ha_oracle_rg:cl_disk_available[253] iscarray[iscsi0]=hdisk2
+ha_oracle_rg:cl_disk_available[1774] : Take care of all the fibre scsi disks, doing all those associated
+ha_oracle_rg:cl_disk_available[1775] : with a specific adapter at one time
+ha_oracle_rg:cl_disk_available[1782] : Take care of all the parallel scsi disks, doing all those associated
+ha_oracle_rg:cl_disk_available[1783] : with a specific adapter at one time
+ha_oracle_rg:cl_disk_available[1790] : Take care of all the iscsi disks, doing all those associated
+ha_oracle_rg:cl_disk_available[1791] : with a specific adapter at one time
+ha_oracle_rg:cl_disk_available[1794] make_disktypes_available iscsi0 ISCSI hdisk2
+ha_oracle_rg:cl_disk_available[5] parent=iscsi0
+ha_oracle_rg:cl_disk_available[5] typeset parent
+ha_oracle_rg:cl_disk_available[6] disktype=ISCSI
+ha_oracle_rg:cl_disk_available[6] typeset disktype
+ha_oracle_rg:cl_disk_available[7] shift 2
+ha_oracle_rg:cl_disk_available[8] disks=hdisk2
+ha_oracle_rg:cl_disk_available[8] typeset disks
+ha_oracle_rg:cl_disk_available[11] : Scan through the list of given disks, checking each one
+ha_oracle_rg:cl_disk_available[12] : for possible ghost disks, which must be later removed
+ha_oracle_rg:cl_disk_available[17] : The disk is not available
+ha_oracle_rg:cl_disk_available[19] lsdev -Cc disk -l hdisk2 -S A -F status
+ha_oracle_rg:cl_disk_available[19] [[ -z Available ]]
+ha_oracle_rg:cl_disk_available[50] : Break the reserve - pass the given list of disks having the given parent
+ha_oracle_rg:cl_disk_available[115] : Perform a LUN reset on an iSCSI device
+ha_oracle_rg:cl_disk_available[117] cl_iscsilunreset iscsi0 hdisk2
cl_iscsilunreset[855]: ioctl SCIOLSTART lun=0 iSCSI name=iqn.2003-06.com.rocketdivision.starwind.hky-pc-nb.imagefile1 port number=0XCBC address type=IPv4
: Invalid argument
cl_iscsilunreset[430]: openx(hdisk2, O_RDWR, 0, SC_RETAIN_RESERVATION + SC_FORCED_OPEN_LUN): Invalid argument
cl_iscsilunreset[441]: openx(hdisk2, O_RDWR, 0, + SC_FORCED_OPEN): Invalid argument
cl_iscsilunreset[224]: version 1.3
cl_iscsilunreset[250]: open(/dev/iscsi0, O_RDWR)
cl_iscsilunreset[310]: get_sid_lun(hdisk2)
cl_iscsilunreset[530]: odm_get_first(name=hdisk2 AND attribute=lun_id) = 0x0
cl_iscsilunreset[582]: odm_get_first(name=hdisk2 AND attribute=target_name) = iqn.2003-06.com.rocketdivision.starwind.hky-pc-nb.imagefile1
cl_iscsilunreset[628]: odm_get_first(name=hdisk2 AND attribute=port_num) = 0xcbc
cl_iscsilunreset[674]: odm_get_first(name=hdisk2 AND attribute=host_addr) = 192.168.2.10
cl_iscsilunreset[684]: name2ip_addr(192.168.2.10)
cl_iscsilunreset[773]: getaddrinfo(192.168.2.10)
cl_iscsilunreset[731]: hdisk2 lun=0 iSCSI name=iqn.2003-06.com.rocketdivision.starwind.hky-pc-nb.imagefile1 port number=0XCBC address type=IPv4 IP address=192.168.2.10
cl_iscsilunreset[325]: device_connect(hdisk2)
cl_iscsilunreset[486]: close(/dev/iscsi0)
+ha_oracle_rg:cl_disk_available[1798] : Take care of all the RAID scsi disks, doing all those associated
+ha_oracle_rg:cl_disk_available[1799] : with a specific adapter at one time
+ha_oracle_rg:cl_disk_available[1806] : Take care of all SAS disks, doing all those associated with a
+ha_oracle_rg:cl_disk_available[1807] : specific adapter at one time
+ha_oracle_rg:cl_disk_available[1814] : wait to sync any background processes still breaking reserves
+ha_oracle_rg:cl_disk_available[1816] [[ -n '' ]]
+ha_oracle_rg:cl_disk_available[1822] : If there were ghost disk profiles queued up to be removed, do so now.
+ha_oracle_rg:cl_disk_available[1831] : If there were disks to make available, do so now
+ha_oracle_rg:cl_disk_available[1844] : Having finally cleaned up any reserves or ghost disks on the underlying
+ha_oracle_rg:cl_disk_available[1845] : hdisks, break any persistent reserve on vpath devices.
+ha_oracle_rg:cl_disk_available[1852] : Go back and check to see if the various disks came on line, and update
+ha_oracle_rg:cl_disk_available[1853] : the cluster manager status as appropriate
+ha_oracle_rg:cl_disk_available[1855] [[ false == true ]]
+ha_oracle_rg:cl_disk_available[1887] : Resource groups processed serially: check all the disks passed on the command line
+ha_oracle_rg:cl_disk_available[1891] verify_disk_availability hdisk2
+ha_oracle_rg:cl_disk_available[4] disk=hdisk2
+ha_oracle_rg:cl_disk_available[4] typeset disk
+ha_oracle_rg:cl_disk_available[7] : Do not do this check for a Remote Physical Volume Client
+ha_oracle_rg:cl_disk_available[8] : disk. The makedev for a Remote Physical Volume Client disk
+ha_oracle_rg:cl_disk_available[9] : is done in the predisk_available Replicated Resource Method.
+ha_oracle_rg:cl_disk_available[11] lsdev -Cc disk -l hdisk2 -F PdDvLn
+ha_oracle_rg:cl_disk_available[11] PdDvLn=disk/iscsi/osdisk
+ha_oracle_rg:cl_disk_available[12] [[ disk/iscsi/osdisk == disk/remote_disk/rpvclient ]]
+ha_oracle_rg:cl_disk_available[17] lsdev -Cc disk -l hdisk2 -S A -F status
+ha_oracle_rg:cl_disk_available[17] LC_ALL=C
+ha_oracle_rg:cl_disk_available[17] [[ -z Available ]]
+ha_oracle_rg:cl_disk_available[25] : Note that the resource manager is not updated with the status of the
+ha_oracle_rg:cl_disk_available[26] : individual disks. This is because loss of a disk is not necessarily
+ha_oracle_rg:cl_disk_available[27] : severe enough to stop the event - varyonvg may still work.
+ha_oracle_rg:cl_disk_available[1896] : Update the resource manager with the disks that came on line
+ha_oracle_rg:cl_disk_available[1898] ALLNOERR=All_nonerror_disks
+ha_oracle_rg:cl_disk_available[1899] [[ false == true ]]
+ha_oracle_rg:cl_disk_available[1903] [[ false == false ]]
+ha_oracle_rg:cl_disk_available[1904] cl_RMupdate resource_up All_nonerror_disks cl_disk_available
2013-03-25T19:21:49.069633
2013-03-25T19:21:49.088314
Reference string: Mon.Mar.25.19:21:49.BEIST.2013.cl_disk_available.All_nonerror_disks.ha_oracle_rg.ref
+ha_oracle_rg:cl_disk_available[1907] exit 0
+ha_oracle_rg:get_disk_vg_fs[427] : Call replicated resource prevg-online method associated with any
+ha_oracle_rg:get_disk_vg_fs[428] : replicated resource that is a member of the resource group
+ha_oracle_rg:get_disk_vg_fs[429] : we are currently processing. Note that a return code of 3 from
+ha_oracle_rg:get_disk_vg_fs[430] : the prevg-online method indicates the default action should not
+ha_oracle_rg:get_disk_vg_fs[431] : happen. The default action for the online_primary case is to
+ha_oracle_rg:get_disk_vg_fs[432] : varyon the VG. The default action for the online_secondary
+ha_oracle_rg:get_disk_vg_fs[433] : case is to NOT varyon the VG
+ha_oracle_rg:get_disk_vg_fs[435] [[ -n datavg ]]
+ha_oracle_rg:get_disk_vg_fs[435] odmget HACMPrresmethods
+ha_oracle_rg:get_disk_vg_fs[435] [[ -n '' ]]
+ha_oracle_rg:get_disk_vg_fs[435] [[ -n '' ]]
+ha_oracle_rg:get_disk_vg_fs[460] [[ ACQUIRE == ACQUIRE ]]
+ha_oracle_rg:get_disk_vg_fs[463] : This is an online_primary case so an override
+ha_oracle_rg:get_disk_vg_fs[464] : from the RR method means we skip the varyon
+ha_oracle_rg:get_disk_vg_fs[465] : since the default action is to varyon
+ha_oracle_rg:get_disk_vg_fs[467] SKIPVARYON=0
+ha_oracle_rg:get_disk_vg_fs[469] (( 0 == 1 ))
+ha_oracle_rg:get_disk_vg_fs[487] [[ -n datavg ]]
+ha_oracle_rg:get_disk_vg_fs[490] : Remove any duplicates from the list of volume groups to vary on
+ha_oracle_rg:get_disk_vg_fs[492] echo datavg
+ha_oracle_rg:get_disk_vg_fs[492] tr ' ' '\n'
+ha_oracle_rg:get_disk_vg_fs[492] sort -u
+ha_oracle_rg:get_disk_vg_fs[492] INACTIVE_VGS=datavg
+ha_oracle_rg:get_disk_vg_fs[495] : Vary on the volume groups, making any ODM updates necessary
+ha_oracle_rg:get_disk_vg_fs[497] (( 0 == 0 ))
+ha_oracle_rg:get_disk_vg_fs[499] cl_activate_vgs -n datavg
+ha_oracle_rg:cl_activate_vgs[194] version=1.40
+ha_oracle_rg:cl_activate_vgs[196] cl_get_path
+ha_oracle_rg:cl_activate_vgs[196] HA_DIR=es
+ha_oracle_rg:cl_activate_vgs[198] STATUS=0
+ha_oracle_rg:cl_activate_vgs[198] typeset -i STATUS
+ha_oracle_rg:cl_activate_vgs[199] SYNCFLAG=''
+ha_oracle_rg:cl_activate_vgs[200] CLENV=''
+ha_oracle_rg:cl_activate_vgs[201] TMP_FILENAME=/tmp/_activate_vgs.tmp
+ha_oracle_rg:cl_activate_vgs[202] USE_OEM_METHODS=false
+ha_oracle_rg:cl_activate_vgs[204] EMULATE=''
+ha_oracle_rg:cl_activate_vgs[206] PROC_RES=false
+ha_oracle_rg:cl_activate_vgs[210] [[ 0 != 0 ]]
+ha_oracle_rg:cl_activate_vgs[217] [[ -n == -n ]]
+ha_oracle_rg:cl_activate_vgs[219] SYNCFLAG=-n
+ha_oracle_rg:cl_activate_vgs[220] shift
+ha_oracle_rg:cl_activate_vgs[225] [[ datavg == -c ]]
+ha_oracle_rg:cl_activate_vgs[232] set -u
+ha_oracle_rg:cl_activate_vgs[235] rm -f /tmp/_activate_vgs.tmp
+ha_oracle_rg:cl_activate_vgs[239] lsvg -L -o
+ha_oracle_rg:cl_activate_vgs[239] print rootvg
+ha_oracle_rg:cl_activate_vgs[239] VGSTATUS=rootvg
+ha_oracle_rg:cl_activate_vgs[242] ALLVGS=All_volume_groups
+ha_oracle_rg:cl_activate_vgs[243] [[ '' == EMUL ]]
+ha_oracle_rg:cl_activate_vgs[248] cl_RMupdate resource_acquiring All_volume_groups cl_activate_vgs
2013-03-25T19:21:49.636024
2013-03-25T19:21:49.664972
Reference string: Mon.Mar.25.19:21:49.BEIST.2013.cl_activate_vgs.All_volume_groups.ha_oracle_rg.ref
+ha_oracle_rg:cl_activate_vgs[253] [[ false == false ]]
+ha_oracle_rg:cl_activate_vgs[258] (( 1 == 0 ))
+ha_oracle_rg:cl_activate_vgs[266] [[ false == false ]]
+ha_oracle_rg:cl_activate_vgs[268] vgs_list datavg
+ha_oracle_rg:cl_activate_vgs[156] PS4_FUNC=vgs_list
+ha_oracle_rg:cl_activate_vgs[156] PS4_LOOP=''
+ha_oracle_rg:cl_activate_vgs[156] typeset PS4_FUNC PS4_LOOP
+ha_oracle_rg:cl_activate_vgs:datavg[160] PS4_LOOP=datavg
+ha_oracle_rg:cl_activate_vgs:datavg[164] [[ rootvg == @(?(*\ )datavg?(\ *)) ]]
+ha_oracle_rg:cl_activate_vgs:datavg[171] [[ '' == EMUL ]]
+ha_oracle_rg:cl_activate_vgs:datavg[176] vgs_chk datavg -n cl_activate_vgs
+ha_oracle_rg:cl_activate_vgs(.180):datavg[vgs_chk+5] VG=datavg
+ha_oracle_rg:cl_activate_vgs(.180):datavg[vgs_chk+5] typeset VG
+ha_oracle_rg:cl_activate_vgs(.180):datavg[vgs_chk+6] SYNCFLAG=-n
+ha_oracle_rg:cl_activate_vgs(.180):datavg[vgs_chk+6] typeset SYNCFLAG
+ha_oracle_rg:cl_activate_vgs(.180):datavg[vgs_chk+7] PROGNAME=cl_activate_vgs
+ha_oracle_rg:cl_activate_vgs(.180):datavg[vgs_chk+7] typeset PROGNAME
+ha_oracle_rg:cl_activate_vgs(.180):datavg[vgs_chk+8] STATUS=0
+ha_oracle_rg:cl_activate_vgs(.180):datavg[vgs_chk+8] typeset -i STATUS
+ha_oracle_rg:cl_activate_vgs(.180):datavg[vgs_chk+10] [[ -n '' ]]
+ha_oracle_rg:cl_activate_vgs(.180):datavg[vgs_chk+24] typeset -x ERRMSG
+ha_oracle_rg:cl_activate_vgs:datavg[180] unset PS4_LOOP PS4_TIMER
+ha_oracle_rg:cl_activate_vgs[295] wait
+ha_oracle_rg:cl_activate_vgs(.180):datavg[vgs_chk+25] clvaryonvg -n datavg
+ha_oracle_rg:clvaryonvg[616] [[ high == high ]]
+ha_oracle_rg:clvaryonvg[616] version=1.21.1.41
+ha_oracle_rg:clvaryonvg[618] LEAVEOFF=FALSE
+ha_oracle_rg:clvaryonvg[619] FORCEON=''
+ha_oracle_rg:clvaryonvg[620] FORCEUPD=FALSE
+ha_oracle_rg:clvaryonvg[621] NOQUORUM=20
+ha_oracle_rg:clvaryonvg[622] MISSING_UPDATES=30
+ha_oracle_rg:clvaryonvg[623] DATA_DIVERGENCE=31
+ha_oracle_rg:clvaryonvg[624] ARGS=''
+ha_oracle_rg:clvaryonvg[625] typeset -i varyonvg_rc
+ha_oracle_rg:clvaryonvg[626] typeset -i MAXLVS
+ha_oracle_rg:clvaryonvg[628] set -u
+ha_oracle_rg:clvaryonvg[630] /bin/dspmsg -s 2 cspoc.cat 31 'usage: clvaryonvg [-F] [-f] [-n] [-p] [-s] [-o] <vg>\n'
+ha_oracle_rg:clvaryonvg[630] USAGE='usage: clvaryonvg [-F] [-f] [-n] [-p] [-s] [-o] <vg>'
+ha_oracle_rg:clvaryonvg[631] (( 2 < 1 ))
+ha_oracle_rg:clvaryonvg[637] : Parse the options
+ha_oracle_rg:clvaryonvg[639] S_FLAG=''
+ha_oracle_rg:clvaryonvg[640] P_FLAG=''
+ha_oracle_rg:clvaryonvg[641] getopts :Ffnops option
+ha_oracle_rg:clvaryonvg[646] : -n Always applied, retained for compatibility
+ha_oracle_rg:clvaryonvg[641] getopts :Ffnops option
+ha_oracle_rg:clvaryonvg[656] : Pick up the volume group name, which follows the options
+ha_oracle_rg:clvaryonvg[658] shift 1
+ha_oracle_rg:clvaryonvg[659] VG=datavg
+ha_oracle_rg:clvaryonvg[662] : Set up filenames we will be using
+ha_oracle_rg:clvaryonvg[664] VGDIR=/usr/es/sbin/cluster/etc/vg/
+ha_oracle_rg:clvaryonvg[665] TSFILE=/usr/es/sbin/cluster/etc/vg/datavg
+ha_oracle_rg:clvaryonvg[666] DSFILE=/usr/es/sbin/cluster/etc/vg/.desc
+ha_oracle_rg:clvaryonvg[667] RPFILE=/usr/es/sbin/cluster/etc/vg/.replay
+ha_oracle_rg:clvaryonvg[668] permset=/usr/es/sbin/cluster/etc/vg/.perms
+ha_oracle_rg:clvaryonvg[671] : Without this test, cause of failure due to non-root may not be obvious
+ha_oracle_rg:clvaryonvg[673] sed -e s/^uid=// -e 's/(.*//'
+ha_oracle_rg:clvaryonvg[673] id
+ha_oracle_rg:clvaryonvg[673] UID=0
+ha_oracle_rg:clvaryonvg[673] typeset -i UID
+ha_oracle_rg:clvaryonvg[674] (( 0 != 0 ))
+ha_oracle_rg:clvaryonvg[680] : Get some LVM information we are going to need in processing this
+ha_oracle_rg:clvaryonvg[681] : volume group:
+ha_oracle_rg:clvaryonvg[682] : - volume group identifier - vgid
+ha_oracle_rg:clvaryonvg[683] : - list of disks
+ha_oracle_rg:clvaryonvg[684] : - quorum indicator
+ha_oracle_rg:clvaryonvg[685] : - timestamp if present
+ha_oracle_rg:clvaryonvg[687] /usr/sbin/getlvodm -v datavg
+ha_oracle_rg:clvaryonvg[687] VGID=0057d5ec00004c000000013d962e6972
+ha_oracle_rg:clvaryonvg[688] cut '-d ' -f2
+ha_oracle_rg:clvaryonvg[688] /usr/sbin/getlvodm -w 0057d5ec00004c000000013d962e6972
+ha_oracle_rg:clvaryonvg[688] pvlst=hdisk2
+ha_oracle_rg:clvaryonvg[689] /usr/sbin/getlvodm -Q datavg
+ha_oracle_rg:clvaryonvg[689] quorum=y
+ha_oracle_rg:clvaryonvg[690] TS_FROM_DISK=''
+ha_oracle_rg:clvaryonvg[691] TS_FROM_ODM=''
+ha_oracle_rg:clvaryonvg[692] GOOD_PV=''
+ha_oracle_rg:clvaryonvg[695] : Check on the current state of the volume group
+ha_oracle_rg:clvaryonvg[697] grep -x -q datavg
+ha_oracle_rg:clvaryonvg[697] lsvg -L
+ha_oracle_rg:clvaryonvg[699] : The volume group is known - check to see if its already varyd on.
+ha_oracle_rg:clvaryonvg[701] lsvg -L -o
+ha_oracle_rg:clvaryonvg[701] grep -x -q datavg
+ha_oracle_rg:clvaryonvg[777] :
+ha_oracle_rg:clvaryonvg[778] : First, sniff at the disk to see if the local ODM information
+ha_oracle_rg:clvaryonvg[779] : matches what is on the disk.
+ha_oracle_rg:clvaryonvg[780] :
+ha_oracle_rg:clvaryonvg[782] vgdatimestamps
+ha_oracle_rg:clvaryonvg[63] PS4_FUNC=vgdatimestamps
+ha_oracle_rg:clvaryonvg[63] typeset PS4_FUNC
+ha_oracle_rg:clvaryonvg[64] [[ high == high ]]
+ha_oracle_rg:clvaryonvg[64] set -x
+ha_oracle_rg:clvaryonvg[65] set -u
+ha_oracle_rg:clvaryonvg[68] : See what timestamp LVM has recorded from the last time it checked
+ha_oracle_rg:clvaryonvg[69] : the disks
+ha_oracle_rg:clvaryonvg[71] /usr/sbin/getlvodm -T 0057d5ec00004c000000013d962e6972
+ha_oracle_rg:clvaryonvg[71] 2> /dev/null
+ha_oracle_rg:clvaryonvg[71] TS_FROM_ODM=514d5d5e09175e00
+ha_oracle_rg:clvaryonvg[74] : Check to see if HACMP is maintaining a timestamp for this volume group
+ha_oracle_rg:clvaryonvg[75] : Needed for some older volume groups
+ha_oracle_rg:clvaryonvg[77] [[ -s /usr/es/sbin/cluster/etc/vg/datavg ]]
+ha_oracle_rg:clvaryonvg[96] : Get the time stamp from the actual disk
+ha_oracle_rg:clvaryonvg[98] clvgdats /dev/datavg
+ha_oracle_rg:clvaryonvg[98] 2> /dev/null
+ha_oracle_rg:clvaryonvg[98] TS_FROM_DISK=''
+ha_oracle_rg:clvaryonvg[99] clvgdats_rc=1
+ha_oracle_rg:clvaryonvg[100] (( 1 != 0 ))
+ha_oracle_rg:clvaryonvg[103] : If reading from the volume group fails, try the
+ha_oracle_rg:clvaryonvg[104] : individual disks.
+ha_oracle_rg:clvaryonvg[106] get_good_pv
+ha_oracle_rg:clvaryonvg[26] PS4_FUNC=get_good_pv
+ha_oracle_rg:clvaryonvg[26] typeset PS4_FUNC
+ha_oracle_rg:clvaryonvg[27] [[ high == high ]]
+ha_oracle_rg:clvaryonvg[27] set -x
+ha_oracle_rg:clvaryonvg[28] set -u
+ha_oracle_rg:clvaryonvg[30] GOOD_PV=''
+ha_oracle_rg:clvaryonvg[31] cl_querypv_rc=0
+ha_oracle_rg:clvaryonvg[34] : Can we read the VGDA time stamp
+ha_oracle_rg:clvaryonvg[36] cl_querypv -q /dev/hdisk2
cl_querypv[222] openx(/dev/hdisk2, O_RDONLY, 0, SC_NO_RESERVE): Invalid argument
+ha_oracle_rg:clvaryonvg[37] cl_querypv_rc=255
+ha_oracle_rg:clvaryonvg[38] (( 255 == 0 ))
+ha_oracle_rg:clvaryonvg[46] : If we could not read from at least one disk in the
+ha_oracle_rg:clvaryonvg[47] : volume group, something has gone horribly wrong, so we bail.
+ha_oracle_rg:clvaryonvg[48] : export and import would not work, since they have to read
+ha_oracle_rg:clvaryonvg[49] : the disks, too
+ha_oracle_rg:clvaryonvg[51] (( 255 != 0 ))
+ha_oracle_rg:clvaryonvg[51] exit 1
+ha_oracle_rg:cl_activate_vgs(.400):datavg[vgs_chk+25] ERRMSG='cl_querypv[213] version 1.2'
+ha_oracle_rg:cl_activate_vgs(.400):datavg[vgs_chk+26] RC=1
+ha_oracle_rg:cl_activate_vgs(.400):datavg[vgs_chk+27] (( 1 == 1 ))
+ha_oracle_rg:cl_activate_vgs(.400):datavg[vgs_chk+31] cl_RMupdate resource_error datavg cl_activate_vgs
2013-03-25T19:21:49.972619
2013-03-25T19:21:49.989769
Reference string: Mon.Mar.25.19:21:49.BEIST.2013.cl_activate_vgs.datavg.ha_oracle_rg.ref
+ha_oracle_rg:cl_activate_vgs(.510):datavg[vgs_chk+33] cl_echo 21 'cl_activate_vgs: Failed clvaryonvg of datavg.' cl_activate_vgs datavg
+ha_oracle_rg:cl_echo[+35] version=1.16
+ha_oracle_rg:cl_echo[+84] HACMP_OUT_FILE=/var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+89] set +u
+ha_oracle_rg:cl_echo[+89] [[ -n ]]
+ha_oracle_rg:cl_echo[+92] set -u
+ha_oracle_rg:cl_echo[+95] print -n -u2 Mar 25 2013 19:21:50
Mar 25 2013 19:21:50 +ha_oracle_rg:cl_echo[+96] MSG_ID=21
+ha_oracle_rg:cl_echo[+97] shift
+ha_oracle_rg:cl_echo[+98] dspmsg scripts.cat 21 cl_activate_vgs: Failed clvaryonvg of datavg. cl_activate_vgs datavg
+ha_oracle_rg:cl_echo[+98] 1>& 2
cl_activate_vgs: Failed clvaryonvg of datavg.+ha_oracle_rg:cl_echo[+101] clsynclog /var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+101] 1> /dev/null 2>& 1
+ha_oracle_rg:cl_echo[+105] return 0
+ha_oracle_rg:cl_activate_vgs(.610):datavg[vgs_chk+34] STATUS=1
+ha_oracle_rg:cl_activate_vgs(.610):datavg[vgs_chk+36] : exit status of clvaryonvg -n datavg: 1
+ha_oracle_rg:cl_activate_vgs(.610):datavg[vgs_chk+38] [[ -n 'cl_querypv[213] version 1.2' ]]
+ha_oracle_rg:cl_activate_vgs(.610):datavg[vgs_chk+38] (( 1 != 1 ))
+ha_oracle_rg:cl_activate_vgs(.610):datavg[vgs_chk+45] echo datavg 1
+ha_oracle_rg:cl_activate_vgs(.610):datavg[vgs_chk+45] 1>> /tmp/_activate_vgs.tmp
+ha_oracle_rg:cl_activate_vgs(.610):datavg[vgs_chk+46] return 1
+ha_oracle_rg:cl_activate_vgs[301] ALLNOERRVGS=All_nonerror_volume_groups
+ha_oracle_rg:cl_activate_vgs[302] [[ '' == EMUL ]]
+ha_oracle_rg:cl_activate_vgs[307] cl_RMupdate resource_up All_nonerror_volume_groups cl_activate_vgs
2013-03-25T19:21:50.172938
2013-03-25T19:21:50.187014
Reference string: Mon.Mar.25.19:21:50.BEIST.2013.cl_activate_vgs.All_nonerror_volume_groups.ha_oracle_rg.ref
+ha_oracle_rg:cl_activate_vgs[315] [[ -f /tmp/_activate_vgs.tmp ]]
+ha_oracle_rg:cl_activate_vgs[317] grep ' 1' /tmp/_activate_vgs.tmp
datavg 1
+ha_oracle_rg:cl_activate_vgs[319] [[ false == true ]]
+ha_oracle_rg:cl_activate_vgs[322] STATUS=1
+ha_oracle_rg:cl_activate_vgs[326] rm -f /tmp/_activate_vgs.tmp
+ha_oracle_rg:cl_activate_vgs[329] exit 1
+ha_oracle_rg:get_disk_vg_fs[501] exit 1
Mar 25 19:21:50 EVENT FAILED: 1: get_disk_vg_fs ALL datavg 1
+ha_oracle_rg:node_up_local[73] RC=1
+ha_oracle_rg:node_up_local[74] : exit status of get_disk_vg_fs ALL datavg is: 1
+ha_oracle_rg:node_up_local[75] (( 1 != 0 ))
+ha_oracle_rg:node_up_local[77] STATUS=2
+ha_oracle_rg:node_up_local[78] (( 1 == 7 ))
+ha_oracle_rg:node_up_local[81] return
+ha_oracle_rg:node_up_local[338] : exit status of get_filesystems is: 0
+ha_oracle_rg:node_up_local[339] (( 0 == 1 ))
+ha_oracle_rg:node_up_local[344] : Do the required NFS_mounts.
+ha_oracle_rg:node_up_local[346] [[ -n '' ]]
+ha_oracle_rg:node_up_local[407] : Finally, take care of concurrent volume groups.
+ha_oracle_rg:node_up_local[409] [[ -n '' ]]
+ha_oracle_rg:node_up_local[617] : Start tape resources
+ha_oracle_rg:node_up_local[619] [[ -n '' ]]
+ha_oracle_rg:node_up_local[630] : Start AIX Connections services
+ha_oracle_rg:node_up_local[632] [[ -n '' ]]
+ha_oracle_rg:node_up_local[643] : start commlink processing
+ha_oracle_rg:node_up_local[645] [[ -n '' ]]
+ha_oracle_rg:node_up_local[656] : Start UP SMB/FASTConnect resources
+ha_oracle_rg:node_up_local[658] [[ -n '' ]]
+ha_oracle_rg:node_up_local[668] (( 2 != 0 ))
+ha_oracle_rg:node_up_local[670] set_resource_status ERROR
+ha_oracle_rg:node_up_local[132] PS4_FUNC=set_resource_status
+ha_oracle_rg:node_up_local[132] typeset PS4_FUNC
+ha_oracle_rg:node_up_local[133] [[ high == high ]]
+ha_oracle_rg:node_up_local[133] set -x
+ha_oracle_rg:node_up_local[135] set +u
+ha_oracle_rg:node_up_local[136] eval TEMPNFS='$NFS_ha_oracle_rg'
+ha_oracle_rg:node_up_local[1] TEMPNFS=FALSE
+ha_oracle_rg:node_up_local[137] set -u
+ha_oracle_rg:node_up_local[139] [[ -z FALSE ]]
+ha_oracle_rg:node_up_local[139] [[ FALSE == FALSE ]]
+ha_oracle_rg:node_up_local[141] clchdaemons -d clstrmgr_scripts -t resource_locator -n ha_db02 -o ha_oracle_rg -v ERROR
+ha_oracle_rg:node_up_local[148] : Resource Manager Updates
+ha_oracle_rg:node_up_local[150] [[ ERROR == ACQUIRING ]]
+ha_oracle_rg:node_up_local[162] [[ NONE == ACQUIRE_SECONDARY ]]
+ha_oracle_rg:node_up_local[165] [[ NONE == PRIMARY_BECOMES_SECONDARY ]]
+ha_oracle_rg:node_up_local[169] cl_RMupdate rg_error ha_oracle_rg node_up_local
2013-03-25T19:21:50.349509
2013-03-25T19:21:50.363614
Reference string: Mon.Mar.25.19:21:50.BEIST.2013.node_up_local.ha_oracle_rg.ref
+ha_oracle_rg:node_up_local[172] (( 0 != 0 ))
+ha_oracle_rg:node_up_local[671] : exit status of set_resource_status is: 1
+ha_oracle_rg:node_up_local[674] (( 2 == 2 ))
+ha_oracle_rg:node_up_local[676] exit 0
Mar 25 19:21:50 EVENT COMPLETED: node_up_local 0
+ha_oracle_rg:rg_move[+241] [ 0 -ne 0 ]
+ha_oracle_rg:rg_move[+247] UPDATESTATD=1
+ha_oracle_rg:rg_move[+254] process_resources
:process_resources[2423] [[ high == high ]]
:process_resources[2423] version=1.125
:process_resources[2425] STATUS=0
:process_resources[2426] sddsrv_off=FALSE
:process_resources[2428] true
:process_resources[2430] : call rgpa, and it will tell us what to do next
:process_resources[2432] set -a
:process_resources[2433] clRGPA
:clRGPA[+49] [[ high = high ]]
:clRGPA[+49] version=1.16
:clRGPA[+51] usingVer=clrgpa
:clRGPA[+56] clrgpa
2013-03-25T19:21:50.513364 clrgpa
:clRGPA[+57] exit 0
:process_resources[2433] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[2434] RC=0
:process_resources[2435] set +a
:process_resources[2437] (( 0 != 0 ))
:process_resources[2443] RESOURCE_GROUPS=ha_oracle_rg
+ha_oracle_rg:process_resources[2444] GROUPNAME=ha_oracle_rg
+ha_oracle_rg:process_resources[2444] export GROUPNAME
+ha_oracle_rg:process_resources[2748] break
+ha_oracle_rg:process_resources[2759] : If sddsrv was turned off above, turn it back on again
+ha_oracle_rg:process_resources[2761] [[ FALSE == TRUE ]]
+ha_oracle_rg:process_resources[2767] exit 0
+ha_oracle_rg:rg_move[+292] [ -f /tmp/.NFSSTOPPED ]
+ha_oracle_rg:rg_move[+312] [ -f /tmp/.RPCLOCKDSTOPPED ]
+ha_oracle_rg:rg_move[+337] exit 0
Mar 25 19:21:50 EVENT COMPLETED: rg_move ha_db01 1 ACQUIRE 0
:rg_move_acquire[+68] exit_status=0
:rg_move_acquire[+69] : exit status of clcallev rg_move ha_db01 1 ACQUIRE is: 0
:rg_move_acquire[+70] exit 0
Mar 25 19:21:50 EVENT COMPLETED: rg_move_acquire ha_db01 1 0
Mar 25 19:21:52 EVENT START: rg_move_complete ha_db01 1
:get_local_nodename[+42] [[ high = high ]]
:get_local_nodename[+42] version=1.2.1.19
:get_local_nodename[+43] :get_local_nodename[+43] cl_get_path
HA_DIR=es
:get_local_nodename[+44] :get_local_nodename[+44] cl_get_path -S
OP_SEP=~
:get_local_nodename[+46] AIXODMDIR=/etc/objrepos
:get_local_nodename[+47] HAODMDIR=/etc/es/objrepos
:get_local_nodename[+49] :get_local_nodename[+49] uname -m
UNAME=00525B6F4C00
:get_local_nodename[+55] export PLATFORM=__AIX__
:get_local_nodename[+61] export ODMDIR=/etc/es/objrepos
:get_local_nodename[+63] :get_local_nodename[+63] /usr/es/sbin/cluster/utilities/cllsclstr -N
nodename=ha_db02
:get_local_nodename[+65] :get_local_nodename[+65] cllsnode -cS
:get_local_nodename[+65] cut -d: -f1
NODENAME=ha_db01
ha_db02
:get_local_nodename[+69] [[ ha_db01 = ha_db02 ]]
:get_local_nodename[+69] [[ ha_db02 = ha_db02 ]]
:get_local_nodename[+72] print ha_db02
:get_local_nodename[+73] exit 0
:rg_move_complete[+94] [[ high = high ]]
:rg_move_complete[+94] version=1.37
:rg_move_complete[+95] :rg_move_complete[+95] cl_get_path
HA_DIR=es
:rg_move_complete[+104] STATUS=0
:rg_move_complete[+106] [ ! -n ]
:rg_move_complete[+108] EMULATE=REAL
:rg_move_complete[+111] set -u
:rg_move_complete[+113] [ 2 -lt 2 -o 2 -gt 3 ]
:rg_move_complete[+119] export NODENAME=ha_db01
:rg_move_complete[+120] RGID=1
:rg_move_complete[+121] [ 2 -eq 3 ]
:rg_move_complete[+125] RGDESTINATION=
:rg_move_complete[+130] odmget -qid=1 HACMPgroup
:rg_move_complete[+130] egrep group =
:rg_move_complete[+130] awk {print $3}
:rg_move_complete[+130] eval RGNAME="ha_oracle_rg"
:rg_move_complete[+130] RGNAME=ha_oracle_rg
:rg_move_complete[+131] GROUPNAME=ha_oracle_rg
+ha_oracle_rg:rg_move_complete[+133] UPDATESTATD=0
+ha_oracle_rg:rg_move_complete[+134] NFSSTOPPED=0
+ha_oracle_rg:rg_move_complete[+138] odmget HACMPnode
+ha_oracle_rg:rg_move_complete[+138] grep name =
+ha_oracle_rg:rg_move_complete[+138] sort
+ha_oracle_rg:rg_move_complete[+138] uniq
+ha_oracle_rg:rg_move_complete[+138] wc -l
+ha_oracle_rg:rg_move_complete[+138] [ 2 -eq 2 ]
+ha_oracle_rg:rg_move_complete[+141] +ha_oracle_rg:rg_move_complete[+141] odmget HACMPgroup
+ha_oracle_rg:rg_move_complete[+141] grep group =
+ha_oracle_rg:rg_move_complete[+141] awk {print $3}
+ha_oracle_rg:rg_move_complete[+141] sed s/"//g
RESOURCE_GROUPS=ha_oracle_rg
+ha_oracle_rg:rg_move_complete[+145] +ha_oracle_rg:rg_move_complete[+145] odmget -q group=ha_oracle_rg AND name=EXPORT_FILESYSTEM HACMPresource
+ha_oracle_rg:rg_move_complete[+145] grep value =
+ha_oracle_rg:rg_move_complete[+145] awk {print $3}
+ha_oracle_rg:rg_move_complete[+145] sed s/"//g
EXPORTLIST=
+ha_oracle_rg:rg_move_complete[+145] [[ -n ]]
+ha_oracle_rg:rg_move_complete[+170] set -a
+ha_oracle_rg:rg_move_complete[+171] +ha_oracle_rg:rg_move_complete[+171] clsetenvgrp ha_db01 rg_move_complete ha_oracle_rg
:clsetenvgrp[+50] [[ high = high ]]
:clsetenvgrp[+50] version=1.16
:clsetenvgrp[+52] usingVer=clSetenvgrp
:clsetenvgrp[+57] clSetenvgrp ha_db01 rg_move_complete ha_oracle_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+58] exit 0
clsetenvgrp_output=FORCEDOWN_GROUPS=""
RESOURCE_GROUPS=""
HOMELESS_GROUPS=""
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS=""
ASSOCIATE_ACTIONS=""
AUXILLIARY_ACTIONS=""
+ha_oracle_rg:rg_move_complete[+172] RC=0
+ha_oracle_rg:rg_move_complete[+173] eval FORCEDOWN_GROUPS=""
RESOURCE_GROUPS=""
HOMELESS_GROUPS=""
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS=""
ASSOCIATE_ACTIONS=""
AUXILLIARY_ACTIONS=""
+ha_oracle_rg:rg_move_complete[+173] FORCEDOWN_GROUPS=
+ha_oracle_rg:rg_move_complete[+173] RESOURCE_GROUPS=
+ha_oracle_rg:rg_move_complete[+173] HOMELESS_GROUPS=
+ha_oracle_rg:rg_move_complete[+173] HOMELESS_FOLLOWER_GROUPS=
+ha_oracle_rg:rg_move_complete[+173] ERRSTATE_GROUPS=
+ha_oracle_rg:rg_move_complete[+173] PRINCIPAL_ACTIONS=
+ha_oracle_rg:rg_move_complete[+173] ASSOCIATE_ACTIONS=
+ha_oracle_rg:rg_move_complete[+173] AUXILLIARY_ACTIONS=
+ha_oracle_rg:rg_move_complete[+174] set +a
+ha_oracle_rg:rg_move_complete[+175] [ 0 -ne 0 ]
+ha_oracle_rg:rg_move_complete[+249] [ 0 = 1 ]
+ha_oracle_rg:rg_move_complete[+286] [ 0 = 1 ]
+ha_oracle_rg:rg_move_complete[+362] process_resources
:process_resources[2423] [[ high == high ]]
:process_resources[2423] version=1.125
:process_resources[2425] STATUS=0
:process_resources[2426] sddsrv_off=FALSE
:process_resources[2428] true
:process_resources[2430] : call rgpa, and it will tell us what to do next
:process_resources[2432] set -a
:process_resources[2433] clRGPA
:clRGPA[+49] [[ high = high ]]
:clRGPA[+49] version=1.16
:clRGPA[+51] usingVer=clrgpa
:clRGPA[+56] clrgpa
2013-03-25T19:21:53.095304 clrgpa
:clRGPA[+57] exit 0
:process_resources[2433] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[2434] RC=0
:process_resources[2435] set +a
:process_resources[2437] (( 0 != 0 ))
:process_resources[2443] RESOURCE_GROUPS=''
:process_resources[2444] GROUPNAME=''
:process_resources[2444] export GROUPNAME
:process_resources[2748] break
:process_resources[2759] : If sddsrv was turned off above, turn it back on again
:process_resources[2761] [[ FALSE == TRUE ]]
:process_resources[2767] exit 0
+ha_oracle_rg:rg_move_complete[+363] STATUS=0
+ha_oracle_rg:rg_move_complete[+364] : exit status of process_resources is: 0
+ha_oracle_rg:rg_move_complete[+368] [[ FALSE = TRUE ]]
+ha_oracle_rg:rg_move_complete[+392] exit 0
Mar 25 19:21:53 EVENT COMPLETED: rg_move_complete ha_db01 1 0
HACMP Event Summary
Event: TE_RG_MOVE
Start time: Mon Mar 25 19:21:14 2013
End time: Mon Mar 25 19:21:53 2013
Action: Resource: Script Name:
----------------------------------------------------------------------------
Acquiring resource group: ha_oracle_rg node_up_local
Search on: Mon.Mar.25.19:21:44.BEIST.2013.node_up_local.ha_oracle_rg.ref
Acquiring resource: All_service_addrs acquire_service_addr
Search on: Mon.Mar.25.19:21:44.BEIST.2013.acquire_service_addr.All_service_addrs.ha_oracle_rg.ref
Resource online: All_nonerror_service_addrs acquire_service_addr
Search on: Mon.Mar.25.19:21:46.BEIST.2013.acquire_service_addr.All_nonerror_service_addrs.ha_oracle_rg.ref
Acquiring resource: All_disks cl_disk_available
Search on: Mon.Mar.25.19:21:48.BEIST.2013.cl_disk_available.All_disks.ha_oracle_rg.ref
Resource online: All_nonerror_disks cl_disk_available
Search on: Mon.Mar.25.19:21:49.BEIST.2013.cl_disk_available.All_nonerror_disks.ha_oracle_rg.ref
Acquiring resource: All_volume_groups cl_activate_vgs
Search on: Mon.Mar.25.19:21:49.BEIST.2013.cl_activate_vgs.All_volume_groups.ha_oracle_rg.ref
Error encountered with resource: datavg cl_activate_vgs
Search on: Mon.Mar.25.19:21:49.BEIST.2013.cl_activate_vgs.datavg.ha_oracle_rg.ref
Resource online: All_nonerror_volume_groups cl_activate_vgs
Search on: Mon.Mar.25.19:21:50.BEIST.2013.cl_activate_vgs.All_nonerror_volume_groups.ha_oracle_rg.ref
Error encountered with group: ha_oracle_rg node_up_local
Search on: Mon.Mar.25.19:21:50.BEIST.2013.node_up_local.ha_oracle_rg.ref
----------------------------------------------------------------------------
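编者注（a hedged reading of the log, not a confirmed diagnosis）: the event summary above shows the real failure happens *before* the disk heartbeat goes down — `cl_activate_vgs` returns an error on `datavg`, which puts `ha_oracle_rg` into ERROR; the `network_down net_diskhb_01` that follows is a symptom. On iSCSI shared storage a common cause is a SCSI reservation or stale varyon left by the releasing node. A first-pass check (run on both nodes while the move is failing; `hdiskX` is a placeholder for the actual shared disk backing `datavg`):

```
# Is datavg still varied on where it should already be released?
lsvg -o | grep datavg

# Read the VGDA straight from the shared disk -- confirms both
# nodes see the same VG and the disk is readable at all:
lqueryvg -Atp hdiskX

# HACMP shared disks should normally carry no SCSI reserve:
lsattr -El hdiskX -a reserve_policy

# Any LVM / disk errors logged around the time of the rg_move?
errpt -a | more
```

If `reserve_policy` is not `no_reserve`, a reservation held by the node releasing the RG would explain both the varyon error on the target node and the heartbeat disk appearing dead to it.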
Mar 25 19:21:53 EVENT START: network_down ha_db02 net_diskhb_01
:network_down[+62] [[ high = high ]]
:network_down[+62] version=1.23
:network_down[+63] :network_down[+63] cl_get_path
HA_DIR=es
:network_down[+65] [ 2 -ne 2 ]
:network_down[+77] :network_down[+77] cl_rrmethods2call net_cleanup
:cl_rrmethods2call[+49] [[ high = high ]]
:cl_rrmethods2call[+49] version=1.17
:cl_rrmethods2call[+50] :cl_rrmethods2call[+50] cl_get_path
HA_DIR=es
:cl_rrmethods2call[+76] RRMETHODS=
:cl_rrmethods2call[+77] NEED_RR_ENV_VARS=no
:cl_rrmethods2call[+83] :cl_rrmethods2call[+83] odmget -qname=net_diskhb_01 HACMPnetwork
:cl_rrmethods2call[+83] egrep nimname
:cl_rrmethods2call[+83] awk {print $3}
:cl_rrmethods2call[+83] sed s/"//g
RRNET=diskhb
:cl_rrmethods2call[+83] [[ diskhb = XD_data ]]
:cl_rrmethods2call[+89] exit 0
METHODS=
:network_down[+91] set -u
:network_down[+104] exit 0
Mar 25 19:21:53 EVENT COMPLETED: network_down ha_db02 net_diskhb_01 0
Mar 25 19:21:54 EVENT START: network_down_complete ha_db02 net_diskhb_01
:network_down_complete[+61] [[ high = high ]]
:network_down_complete[+61] version=1.1.1.13
:network_down_complete[+62] :network_down_complete[+62] cl_get_path
HA_DIR=es
:network_down_complete[+64] [ ! -n ]
:network_down_complete[+66] EMULATE=REAL
:network_down_complete[+69] [ 2 -ne 2 ]
:network_down_complete[+75] set -u
:network_down_complete[+81] STATUS=0
:network_down_complete[+85] odmget HACMPnode
:network_down_complete[+85] grep name =
:network_down_complete[+85] sort
:network_down_complete[+85] uniq
:network_down_complete[+85] wc -l
:network_down_complete[+85] [ 2 -eq 2 ]
:network_down_complete[+87] :network_down_complete[+87] odmget HACMPgroup
:network_down_complete[+87] grep group =
:network_down_complete[+87] awk {print $3}
:network_down_complete[+87] sed s/"//g
RESOURCE_GROUPS=ha_oracle_rg
:network_down_complete[+91] :network_down_complete[+91] odmget -q group=ha_oracle_rg AND name=EXPORT_FILESYSTEM HACMPresource
:network_down_complete[+91] grep value
:network_down_complete[+91] awk {print $3}
:network_down_complete[+91] sed s/"//g
EXPORTLIST=
:network_down_complete[+92] [ -n ]
:network_down_complete[+114] cl_hb_alias_network net_diskhb_01 add
:cl_hb_alias_network[+57] [[ high = high ]]
:cl_hb_alias_network[+57] version=1.4
:cl_hb_alias_network[+58] :cl_hb_alias_network[+58] cl_get_path
HA_DIR=es
:cl_hb_alias_network[+59] :cl_hb_alias_network[+59] cl_get_path -S
OP_SEP=~
:cl_hb_alias_network[+61] NETWORK=net_diskhb_01
:cl_hb_alias_network[+62] ACTION=add
:cl_hb_alias_network[+65] [[ 2 != 2 ]]
:cl_hb_alias_network[+71] [[ add != add ]]
:cl_hb_alias_network[+77] set -u
:cl_hb_alias_network[+79] cl_echo 33 Starting execution of /usr/es/sbin/cluster/utilities/cl_hb_alias_network with parameters net_diskhb_01 add\n /usr/es/sbin/cluster/utilities/cl_hb_alias_network net_diskhb_01 add
:cl_echo[+35] version=1.16
:cl_echo[+84] HACMP_OUT_FILE=/var/hacmp/log/hacmp.out
:cl_echo[+89] set +u
:cl_echo[+89] [[ -n ]]
:cl_echo[+92] set -u
:cl_echo[+95] print -n -u2 Mar 25 2013 19:21:54
Mar 25 2013 19:21:54 :cl_echo[+96] MSG_ID=33
:cl_echo[+97] shift
:cl_echo[+98] dspmsg scripts.cat 33 Starting execution of /usr/es/sbin/cluster/utilities/cl_hb_alias_network with parameters net_diskhb_01 add\n /usr/es/sbin/cluster/utilities/cl_hb_alias_network net_diskhb_01 add
:cl_echo[+98] 1>& 2
Starting execution of /usr/es/sbin/cluster/utilities/cl_hb_alias_network with parameters net_diskhb_01 add
:cl_echo[+101] clsynclog /var/hacmp/log/hacmp.out
:cl_echo[+101] 1> /dev/null 2>& 1
:cl_echo[+105] return 0
:cl_hb_alias_network[+80] date
Mon Mar 25 19:21:54 BEIST 2013
:cl_hb_alias_network[+82] :cl_hb_alias_network[+82] get_local_nodename
:get_local_nodename[+42] [[ high = high ]]
:get_local_nodename[+42] version=1.2.1.19
:get_local_nodename[+43] :get_local_nodename[+43] cl_get_path
HA_DIR=es
:get_local_nodename[+44] :get_local_nodename[+44] cl_get_path -S
OP_SEP=~
:get_local_nodename[+46] AIXODMDIR=/etc/objrepos
:get_local_nodename[+47] HAODMDIR=/etc/es/objrepos
:get_local_nodename[+49] :get_local_nodename[+49] uname -m
UNAME=00525B6F4C00
:get_local_nodename[+55] export PLATFORM=__AIX__
:get_local_nodename[+61] export ODMDIR=/etc/es/objrepos
:get_local_nodename[+63] :get_local_nodename[+63] /usr/es/sbin/cluster/utilities/cllsclstr -N
nodename=ha_db02
:get_local_nodename[+65] :get_local_nodename[+65] cllsnode -cS
:get_local_nodename[+65] cut -d: -f1
NODENAME=ha_db01
ha_db02
:get_local_nodename[+69] [[ ha_db01 = ha_db02 ]]
:get_local_nodename[+69] [[ ha_db02 = ha_db02 ]]
:get_local_nodename[+72] print ha_db02
:get_local_nodename[+73] exit 0
LOCALNODENAME=ha_db02
:cl_hb_alias_network[+83] STATUS=0
:cl_hb_alias_network[+86] cllsnw -J ~ -Sn net_diskhb_01
:cl_hb_alias_network[+86] cut -d~ -f4
:cl_hb_alias_network[+86] grep -q hb_over_alias
:cl_hb_alias_network[+86] exit 0
:network_down_complete[+120] exit 0
Mar 25 19:21:54 EVENT COMPLETED: network_down_complete ha_db02 net_diskhb_01 0
HACMP Event Summary
Event: TE_FAIL_NETWORK
Start time: Mon Mar 25 19:21:53 2013
End time: Mon Mar 25 19:21:54 2013
Action: Resource: Script Name:
----------------------------------------------------------------------------
No resources changed as a result of this event
----------------------------------------------------------------------------
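编者注: after both nodes have been stopped with `clstop`（as the original post describes）, the diskhb path itself can be exercised outside of cluster services with the RSCT `dhb_read` test utility. A sketch, with `hdiskX` standing in for the actual heartbeat disk on each node:

```
# On ha_db01 -- put the shared disk in receive mode:
/usr/sbin/rsct/bin/dhb_read -p hdiskX -r

# On ha_db02 -- transmit to the same shared disk:
/usr/sbin/rsct/bin/dhb_read -p hdiskX -t
```

A healthy path reports the link as operating normally on both sides; if this standalone test fails while the cluster is down, the problem is in the storage/iSCSI layer rather than in the RG move logic.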
Mar 25 19:21:56 EVENT START: rg_move_release ha_db02 1
:rg_move_release[+54] [[ high = high ]]
:rg_move_release[+54] version=1.6
:rg_move_release[+56] set -u
:rg_move_release[+58] [ 2 != 2 ]
:rg_move_release[+64] set +u
:rg_move_release[+66] clcallev rg_move ha_db02 1 RELEASE
Mar 25 19:21:56 EVENT START: rg_move ha_db02 1 RELEASE
:get_local_nodename[+42] [[ high = high ]]
:get_local_nodename[+42] version=1.2.1.19
:get_local_nodename[+43] :get_local_nodename[+43] cl_get_path
HA_DIR=es
:get_local_nodename[+44] :get_local_nodename[+44] cl_get_path -S
OP_SEP=~
:get_local_nodename[+46] AIXODMDIR=/etc/objrepos
:get_local_nodename[+47] HAODMDIR=/etc/es/objrepos
:get_local_nodename[+49] :get_local_nodename[+49] uname -m
UNAME=00525B6F4C00
:get_local_nodename[+55] export PLATFORM=__AIX__
:get_local_nodename[+61] export ODMDIR=/etc/es/objrepos
:get_local_nodename[+63] :get_local_nodename[+63] /usr/es/sbin/cluster/utilities/cllsclstr -N
nodename=ha_db02
:get_local_nodename[+65] :get_local_nodename[+65] cllsnode -cS
:get_local_nodename[+65] cut -d: -f1
NODENAME=ha_db01
ha_db02
:get_local_nodename[+69] [[ ha_db01 = ha_db02 ]]
:get_local_nodename[+69] [[ ha_db02 = ha_db02 ]]
:get_local_nodename[+72] print ha_db02
:get_local_nodename[+73] exit 0
:rg_move[+71] version=1.49
:rg_move[+81] STATUS=0
:rg_move[+83] [ ! -n ]
:rg_move[+85] EMULATE=REAL
:rg_move[+91] set -u
:rg_move[+93] export NODENAME=ha_db02
:rg_move[+94] RGID=1
:rg_move[+95] [[ 3 = 3 ]]
:rg_move[+97] ACTION=RELEASE
:rg_move[+104] odmget -qid=1 HACMPgroup
:rg_move[+104] egrep group =
:rg_move[+104] awk {print $3}
:rg_move[+104] eval RGNAME="ha_oracle_rg"
:rg_move[+104] RGNAME=ha_oracle_rg
:rg_move[+106] UPDATESTATD=0
:rg_move[+107] export UPDATESTATD
:rg_move[+111] export RG_MOVE_EVENT=true
:rg_move[+116] group_state=$RESGRP_ha_oracle_rg_ha_db02
:rg_move[+117] set +u
:rg_move[+118] eval print $RESGRP_ha_oracle_rg_ha_db02
:rg_move[+118] print ERROR
:rg_move[+118] export RG_MOVE_ONLINE=ERROR
:rg_move[+119] set -u
:rg_move[+120] RG_MOVE_ONLINE=ERROR
:rg_move[+127] rm -f /tmp/.NFSSTOPPED
:rg_move[+128] rm -f /tmp/.RPCLOCKDSTOPPED
:rg_move[+135] set -a
:rg_move[+136] :rg_move[+136] clsetenvgrp ha_db02 rg_move ha_oracle_rg
:clsetenvgrp[+50] [[ high = high ]]
:clsetenvgrp[+50] version=1.16
:clsetenvgrp[+52] usingVer=clSetenvgrp
:clsetenvgrp[+57] clSetenvgrp ha_db02 rg_move ha_oracle_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+58] exit 0
clsetenvgrp_output=IPAT_ha_oracle_rg="TRUE"
FORCEDOWN_GROUPS=""
RESOURCE_GROUPS="ha_oracle_rg"
HOMELESS_GROUPS="ha_oracle_rg"
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS="R"
ASSOCIATE_ACTIONS="S"
AUXILLIARY_ACTIONS="N"
:rg_move[+137] RC=0
:rg_move[+138] eval IPAT_ha_oracle_rg="TRUE"
FORCEDOWN_GROUPS=""
RESOURCE_GROUPS="ha_oracle_rg"
HOMELESS_GROUPS="ha_oracle_rg"
HOMELESS_FOLLOWER_GROUPS=""
ERRSTATE_GROUPS=""
PRINCIPAL_ACTIONS="R"
ASSOCIATE_ACTIONS="S"
AUXILLIARY_ACTIONS="N"
:rg_move[+138] IPAT_ha_oracle_rg=TRUE
:rg_move[+138] FORCEDOWN_GROUPS=
:rg_move[+138] RESOURCE_GROUPS=ha_oracle_rg
:rg_move[+138] HOMELESS_GROUPS=ha_oracle_rg
:rg_move[+138] HOMELESS_FOLLOWER_GROUPS=
:rg_move[+138] ERRSTATE_GROUPS=
:rg_move[+138] PRINCIPAL_ACTIONS=R
:rg_move[+138] ASSOCIATE_ACTIONS=S
:rg_move[+138] AUXILLIARY_ACTIONS=N
:rg_move[+139] set +a
:rg_move[+143] [[ 0 -ne 0 ]]
:rg_move[+143] [[ -z ha_oracle_rg ]]
:rg_move[+152] [[ -z FALSE ]]
:rg_move[+198] set -a
:rg_move[+199] clsetenvres ha_oracle_rg rg_move
:rg_move[+199] eval PRINCIPAL_ACTION="RELEASE" ASSOCIATE_ACTION="SUSTAIN" AUXILLIARY_ACTION="NONE" VG_RR_ACTION="RELEASE" SIBLING_NODES= SIBLING_NONOWNER_NODES= SIBLING_ACQUIRING_NODES= SIBLING_RELEASING_NODES= FOLLOWER_ACTION="NONE" NFS_HOST= DISK= CONCURRENT_VOLUME_GROUP= EXPORT_FILESYSTEM= EXPORT_FILESYSTEM_V4= STABLE_STORAGE_PATH= STABLE_STORAGE_COOKIE= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= COMMUNICATION_LINKS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= INACTIVE_TAKEOVER= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= PPRC_REP_RESOURCE= ERCMF_REP_RESOURCE= SVCPPRC_REP_RESOURCE= SR_REP_RESOURCE= TC_REP_RESOURCE= GMD_REP_RESOURCE= GMVG_REP_RESOURCE= OEM_VOLUME_GROUP= OEM_FILESYSTEM= APPLICATIONS="ha_app" FILESYSTEM="ALL" FORCED_VARYON="false" FSCHECK_TOOL="fsck" FS_BEFORE_IPADDR="false" RECOVERY_METHOD="sequential" SERVICE_LABEL="ser" SSA_DISK_FENCING="false" VG_AUTO_IMPORT="false" VOLUME_GROUP="datavg"
:rg_move[+199] PRINCIPAL_ACTION=RELEASE ASSOCIATE_ACTION=SUSTAIN AUXILLIARY_ACTION=NONE VG_RR_ACTION=RELEASE SIBLING_NODES= SIBLING_NONOWNER_NODES= SIBLING_ACQUIRING_NODES= SIBLING_RELEASING_NODES= FOLLOWER_ACTION=NONE NFS_HOST= DISK= CONCURRENT_VOLUME_GROUP= EXPORT_FILESYSTEM= EXPORT_FILESYSTEM_V4= STABLE_STORAGE_PATH= STABLE_STORAGE_COOKIE= AIX_CONNECTIONS_SERVICES= AIX_FAST_CONNECT_SERVICES= SNA_CONNECTIONS= COMMUNICATION_LINKS= SHARED_TAPE_RESOURCES= MOUNT_FILESYSTEM= INACTIVE_TAKEOVER= TAKEOVER_LABEL= NFSMOUNT_LABEL= MISC_DATA= NFS_NETWORK= SHARED_TAPE_RESOURCES= PPRC_REP_RESOURCE= ERCMF_REP_RESOURCE= SVCPPRC_REP_RESOURCE= SR_REP_RESOURCE= TC_REP_RESOURCE= GMD_REP_RESOURCE= GMVG_REP_RESOURCE= OEM_VOLUME_GROUP= OEM_FILESYSTEM= APPLICATIONS=ha_app FILESYSTEM=ALL FORCED_VARYON=false FSCHECK_TOOL=fsck FS_BEFORE_IPADDR=false RECOVERY_METHOD=sequential SERVICE_LABEL=ser SSA_DISK_FENCING=false VG_AUTO_IMPORT=false VOLUME_GROUP=datavg
:rg_move[+200] set +a
:rg_move[+201] export GROUPNAME=ha_oracle_rg
+ha_oracle_rg:rg_move[+201] [[ RELEASE = ]]
+ha_oracle_rg:rg_move[+225] [ RELEASE = RELEASE ]
+ha_oracle_rg:rg_move[+227] [ RELEASE = RELEASE ]
+ha_oracle_rg:rg_move[+229] clcallev node_down_local
Mar 25 19:21:56 EVENT START: node_down_local
+ha_oracle_rg:node_down_local[154] [[ high == high ]]
+ha_oracle_rg:node_down_local[154] version=1.2.1.94
+ha_oracle_rg:node_down_local[156] NOT_RELEASE=''
+ha_oracle_rg:node_down_local[157] STATUS=0
+ha_oracle_rg:node_down_local[157] typeset -i STATUS
+ha_oracle_rg:node_down_local[159] EMULATE=REAL
+ha_oracle_rg:node_down_local[161] (( 0 !=0 ))
+ha_oracle_rg:node_down_local[169] set +u
+ha_oracle_rg:node_down_local[170] RESOURCES_CLEANUP=''
+ha_oracle_rg:node_down_local[171] set -u
+ha_oracle_rg:node_down_local[174] set +u
+ha_oracle_rg:node_down_local[175] OEM_FILESYSTEM=''
+ha_oracle_rg:node_down_local[176] OEM_VOLUME_GROUP=''
+ha_oracle_rg:node_down_local[179] eval echo '${RESGRP_ha_oracle_rg_ha_db02}'
+ha_oracle_rg:node_down_local[1] echo ERROR
+ha_oracle_rg:node_down_local[179] read group_state
+ha_oracle_rg:node_down_local[183] [[ '' != CLEANUP ]]
+ha_oracle_rg:node_down_local[190] eval 'echo ${RESGRP_ha_oracle_rg_ha_db02}'
+ha_oracle_rg:node_down_local[1] echo ERROR
+ha_oracle_rg:node_down_local[190] read state
+ha_oracle_rg:node_down_local[191] [[ ERROR != TMP_ERROR ]]
+ha_oracle_rg:node_down_local[192] [[ ERROR != ONLINE ]]
+ha_oracle_rg:node_down_local[193] [[ ERROR != ERROR ]]
+ha_oracle_rg:node_down_local[203] set_resource_status RELEASING
+ha_oracle_rg:node_down_local[4] set +u
+ha_oracle_rg:node_down_local[5] NOT_DOIT=''
+ha_oracle_rg:node_down_local[6] set -u
+ha_oracle_rg:node_down_local[8] [[ '' == CLEANUP ]]
+ha_oracle_rg:node_down_local[12] [[ '' != TRUE ]]
+ha_oracle_rg:node_down_local[14] [[ REAL == EMUL ]]
+ha_oracle_rg:node_down_local[19] clchdaemons -d clstrmgr_scripts -t resource_locator -n ha_db02 -o ha_oracle_rg -v RELEASING
+ha_oracle_rg:node_down_local[28] [[ RELEASING == RELEASING ]]
+ha_oracle_rg:node_down_local[30] [[ NONE == RELEASE_SECONDARY ]]
+ha_oracle_rg:node_down_local[31] [[ NONE == SECONDARY_BECOMES_PRIMARY ]]
+ha_oracle_rg:node_down_local[35] cl_RMupdate releasing ha_oracle_rg node_down_local
2013-03-25T19:21:57.023621
2013-03-25T19:21:57.037848
Reference string: Mon.Mar.25.19:21:56.BEIST.2013.node_down_local.ha_oracle_rg.ref
+ha_oracle_rg:node_down_local[213] [[ -z ERROR ]]
+ha_oracle_rg:node_down_local[220] set -u
+ha_oracle_rg:node_down_local[225] [[ -n ha_app ]]
+ha_oracle_rg:node_down_local[232] TMPLIST=''
+ha_oracle_rg:node_down_local[233] [[ -n ha_app ]]
+ha_oracle_rg:node_down_local[234] print ha_app
+ha_oracle_rg:node_down_local[234] read first_one APPLICATIONS
+ha_oracle_rg:node_down_local[235] TMPLIST=ha_app
+ha_oracle_rg:node_down_local[233] [[ -n '' ]]
+ha_oracle_rg:node_down_local[238] APPLICATIONS=ha_app
+ha_oracle_rg:node_down_local[241] [[ REAL == EMUL ]]
+ha_oracle_rg:node_down_local[246] [[ '' != TRUE ]]
+ha_oracle_rg:node_down_local[247] clcallev stop_server ha_app
Mar 25 19:21:57 EVENT START: stop_server ha_app
+ha_oracle_rg:stop_server[+48] [[ high = high ]]
+ha_oracle_rg:stop_server[+48] version=1.4.1.13
+ha_oracle_rg:stop_server[+49] +ha_oracle_rg:stop_server[+49] cl_get_path
HA_DIR=es
+ha_oracle_rg:stop_server[+51] STATUS=0
+ha_oracle_rg:stop_server[+55] [ ! -n ]
+ha_oracle_rg:stop_server[+57] EMULATE=REAL
+ha_oracle_rg:stop_server[+60] PROC_RES=false
+ha_oracle_rg:stop_server[+64] [[ 0 != 0 ]]
+ha_oracle_rg:stop_server[+68] typeset WPARNAME WPARDIR EXEC
+ha_oracle_rg:stop_server[+69] WPARDIR=
+ha_oracle_rg:stop_server[+70] EXEC=
+ha_oracle_rg:stop_server[+72] +ha_oracle_rg:stop_server[+72] clwparname ha_oracle_rg
+ha_oracle_rg:clwparname[35] [[ high == high ]]
+ha_oracle_rg:clwparname[35] version=1.3
+ha_oracle_rg:clwparname[37] . /usr/es/sbin/cluster/wpar/wpar_utils
+ha_oracle_rg:clwparname[+20] ERRNO=0
+ha_oracle_rg:clwparname[+22] [[ high == high ]]
+ha_oracle_rg:clwparname[+22] set -x
+ha_oracle_rg:clwparname[+23] [[ high == high ]]
+ha_oracle_rg:clwparname[+23] version=1.10
+ha_oracle_rg:clwparname[+25] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+ha_oracle_rg:clwparname[+20] [[ high == high ]]
+ha_oracle_rg:clwparname[+20] set -x
+ha_oracle_rg:clwparname[+21] [[ high == high ]]
+ha_oracle_rg:clwparname[+21] version=1.4
+ha_oracle_rg:clwparname[+23] PATH=/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/events/emulate:/usr/es/sbin/cluster/events/emulate/driver:/usr/es/sbin/cluster/events/emulate/utils:/usr/es/sbin/cluster/tguides/bin:/usr/es/sbin/cluster/tguides/classes:/usr/es/sbin/cluster/tguides/images:/usr/es/sbin/cluster/tguides/scripts:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+ha_oracle_rg:clwparname[+24] export PATH
+ha_oracle_rg:clwparname[+26] typeset usageErr invalArgErr internalErr
+ha_oracle_rg:clwparname[+28] usageErr=10
+ha_oracle_rg:clwparname[+29] invalArgErr=11
+ha_oracle_rg:clwparname[+30] internalErr=12
+ha_oracle_rg:clwparname[+39] rgName=ha_oracle_rg
+ha_oracle_rg:clwparname[+42] uname
+ha_oracle_rg:clwparname[+42] OSNAME=AIX
+ha_oracle_rg:clwparname[+51] [[ AIX == *AIX* ]]
+ha_oracle_rg:clwparname[+54] lslpp -l bos.wpars
+ha_oracle_rg:clwparname[+54] 1> /dev/null 2>& 1
+ha_oracle_rg:clwparname[+56] loadWparName ha_oracle_rg
+ha_oracle_rg:clwparname[loadWparName+5] usage='Usage: loadWparName <resource group name>'
+ha_oracle_rg:clwparname[loadWparName+5] typeset -r usage
+ha_oracle_rg:clwparname[loadWparName+6] typeset rgName wparName wparDir rc
+ha_oracle_rg:clwparname[loadWparName+8] [[ 1 < 1 ]]
+ha_oracle_rg:clwparname[loadWparName+13] rgName=ha_oracle_rg
+ha_oracle_rg:clwparname[loadWparName+14] wparDir=/var/hacmp/adm/wpar
+ha_oracle_rg:clwparname[loadWparName+15] [[ ! -d /var/hacmp/adm/wpar ]]
+ha_oracle_rg:clwparname[loadWparName+22] [[ -f /var/hacmp/adm/wpar/ha_oracle_rg ]]
+ha_oracle_rg:clwparname[loadWparName+23] cat /var/hacmp/adm/wpar/ha_oracle_rg
+ha_oracle_rg:clwparname[loadWparName+23] wparName=''
+ha_oracle_rg:clwparname[loadWparName+24] [[ -n '' ]]
+ha_oracle_rg:clwparname[loadWparName+36] return 0
+ha_oracle_rg:clwparname[+56] wparName=''
+ha_oracle_rg:clwparname[+57] rc=0
+ha_oracle_rg:clwparname[+58] (( 0 != 0 ))
+ha_oracle_rg:clwparname[+64] printf %s
+ha_oracle_rg:clwparname[+65] exit 0
WPARNAME=
+ha_oracle_rg:stop_server[+74] set -u
+ha_oracle_rg:stop_server[+77] ALLSERVERS=All_servers
+ha_oracle_rg:stop_server[+78] [ REAL = EMUL ]
+ha_oracle_rg:stop_server[+83] cl_RMupdate resource_releasing All_servers stop_server
2013-03-25T19:21:57.271807
2013-03-25T19:21:57.285588
Reference string: Mon.Mar.25.19:21:57.BEIST.2013.stop_server.All_servers.ha_oracle_rg.ref
+ha_oracle_rg:stop_server[+88] [[ -n ]]
+ha_oracle_rg:stop_server[+107] +ha_oracle_rg:stop_server[+107] cllsserv -cn ha_app
+ha_oracle_rg:stop_server[+107] cut -d: -f3
STOP=/usr/hascript/oracle_stop
+ha_oracle_rg:stop_server[+108] +ha_oracle_rg:stop_server[+108] echo /usr/hascript/oracle_stop
+ha_oracle_rg:stop_server[+108] cut -d -f1
STOP_SCRIPT=/usr/hascript/oracle_stop
+ha_oracle_rg:stop_server[+110] PATTERN=ha_db02 ha_app
+ha_oracle_rg:stop_server[+110] [[ -n ]]
+ha_oracle_rg:stop_server[+110] [[ -z ]]
+ha_oracle_rg:stop_server[+110] [[ -x /usr/hascript/oracle_stop ]]
+ha_oracle_rg:stop_server[+120] [ REAL = EMUL ]
+ha_oracle_rg:stop_server[+125] /usr/hascript/oracle_stop
+ha_oracle_rg:stop_server[+125] ODMDIR=/etc/objrepos
db02:The ORACLE Server is stopping,Please Waiting.
SQL*Plus: Release 11.2.0.1.0 Production on Mon Mar 25 19:21:57 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
SQL> Connected to an idle instance.
SQL> ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
IBM AIX RISC System/6000 Error: 2: No such file or directory
SQL> Disconnected
LSNRCTL for IBM/AIX RISC System/6000: Version 11.2.0.1.0 - Production on 25-MAR-2013 19:22:02
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ser)(PORT=1521)))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
IBM/AIX RISC System/6000 Error: 79: Connection refused
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
IBM/AIX RISC System/6000 Error: 2: No such file or directory
db02:The ORACLE Server is stoped.
+ha_oracle_rg:stop_server[+127] [ 0 -ne 0 ]
+ha_oracle_rg:stop_server[+155] ALLNOERRSERV=All_nonerror_servers
+ha_oracle_rg:stop_server[+156] [ REAL = EMUL ]
+ha_oracle_rg:stop_server[+161] cl_RMupdate resource_down All_nonerror_servers stop_server
2013-03-25T19:22:02.856800
2013-03-25T19:22:02.870572
Reference string: Mon.Mar.25.19:22:02.BEIST.2013.stop_server.All_nonerror_servers.ha_oracle_rg.ref
+ha_oracle_rg:stop_server[+164] exit 0
Mar 25 19:22:02 EVENT COMPLETED: stop_server ha_app 0
+ha_oracle_rg:node_down_local[258] server_release_lpar_resources ha_app
+ha_oracle_rg:server_release_lpar_resources[721] [[ high == high ]]
+ha_oracle_rg:server_release_lpar_resources[721] version=1.14.1.15
+ha_oracle_rg:server_release_lpar_resources[723] typeset HOSTNAME
+ha_oracle_rg:server_release_lpar_resources[724] typeset MANAGED_SYSTEM
+ha_oracle_rg:server_release_lpar_resources[725] typeset HMC_IP
+ha_oracle_rg:server_release_lpar_resources[726] added_apps=''
+ha_oracle_rg:server_release_lpar_resources[726] typeset added_apps
+ha_oracle_rg:server_release_lpar_resources[727] APPLICATIONS=''
+ha_oracle_rg:server_release_lpar_resources[727] typeset APPLICATIONS
+ha_oracle_rg:server_release_lpar_resources[728] mem_release_type=''
+ha_oracle_rg:server_release_lpar_resources[728] typeset mem_release_type
+ha_oracle_rg:server_release_lpar_resources[730] mem_resource=0
+ha_oracle_rg:server_release_lpar_resources[730] typeset mem_resource
+ha_oracle_rg:server_release_lpar_resources[731] cpu_resource=0
+ha_oracle_rg:server_release_lpar_resources[731] typeset cpu_resource
+ha_oracle_rg:server_release_lpar_resources[732] cuod_mem_resource=0
+ha_oracle_rg:server_release_lpar_resources[732] typeset cuod_mem_resource
+ha_oracle_rg:server_release_lpar_resources[733] cuod_cpu_resource=0
+ha_oracle_rg:server_release_lpar_resources[733] typeset cuod_cpu_resource
+ha_oracle_rg:server_release_lpar_resources[735] display_event_summary=false
+ha_oracle_rg:server_release_lpar_resources[735] typeset display_event_summary
+ha_oracle_rg:server_release_lpar_resources[737] lmb_size=0
+ha_oracle_rg:server_release_lpar_resources[737] typeset lmb_size
+ha_oracle_rg:server_release_lpar_resources[739] typeset -i check_cuod
+ha_oracle_rg:server_release_lpar_resources[740] RC=0
+ha_oracle_rg:server_release_lpar_resources[740] typeset -i RC
+ha_oracle_rg:server_release_lpar_resources[744] : Look for any added application servers, beyond those running at the moment
+ha_oracle_rg:server_release_lpar_resources[746] getopts :g: opt
+ha_oracle_rg:server_release_lpar_resources[754] shift 0
+ha_oracle_rg:server_release_lpar_resources[756] APPLICATIONS=ha_app
+ha_oracle_rg:server_release_lpar_resources[759] : Set up values we are going to need to talk to the HMC, if they have not
+ha_oracle_rg:server_release_lpar_resources[760] : been set up before.
+ha_oracle_rg:server_release_lpar_resources[762] [[ -z '' ]]
+ha_oracle_rg:server_release_lpar_resources[763] hostname
+ha_oracle_rg:server_release_lpar_resources[763] HOSTNAME=db02
+ha_oracle_rg:server_release_lpar_resources[766] [[ -z ha_db02 ]]
+ha_oracle_rg:server_release_lpar_resources[770] [[ -z '' ]]
+ha_oracle_rg:server_release_lpar_resources[771] clodmget -q 'name = ha_db02 and object = MANAGED_SYSTEM' -f value -n HACMPnode
+ha_oracle_rg:server_release_lpar_resources[771] MANAGED_SYSTEM=''
+ha_oracle_rg:server_release_lpar_resources[774] [[ -z '' ]]
+ha_oracle_rg:server_release_lpar_resources[775] clodmget -q name='ha_db02 and object=HMC_IP' -f value -n HACMPnode
+ha_oracle_rg:server_release_lpar_resources[775] HMC_IP=''
+ha_oracle_rg:server_release_lpar_resources[778] [[ -z '' ]]
+ha_oracle_rg:server_release_lpar_resources[780] : Node is not defined as an LPAR node if there is no HMC to talk to
+ha_oracle_rg:server_release_lpar_resources[782] exit 0
+ha_oracle_rg:node_down_local[259] : exit status of server_release_lpar_resources ha_app is: 0
+ha_oracle_rg:node_down_local[265] [[ -n '' ]]
+ha_oracle_rg:node_down_local[284] [[ -n '' ]]
+ha_oracle_rg:node_down_local[303] [[ -n '' ]]
+ha_oracle_rg:node_down_local[325] [[ -n '' ]]
+ha_oracle_rg:node_down_local[344] CROSSMOUNT=0
+ha_oracle_rg:node_down_local[344] typeset -i CROSSMOUNT
+ha_oracle_rg:node_down_local[345] export CROSSMOUNT
+ha_oracle_rg:node_down_local[347] [[ -n '' ]]
+ha_oracle_rg:node_down_local[387] (( 0 == 0 ))
+ha_oracle_rg:node_down_local[393] grep 'name ='
+ha_oracle_rg:node_down_local[393] sort
+ha_oracle_rg:node_down_local[393] odmget HACMPnode
+ha_oracle_rg:node_down_local[393] uniq
+ha_oracle_rg:node_down_local[393] wc -l
+ha_oracle_rg:node_down_local[393] (( 2 == 2 ))
+ha_oracle_rg:node_down_local[395] grep 'group ='
+ha_oracle_rg:node_down_local[395] cut -f2 '-d"'
+ha_oracle_rg:node_down_local[395] odmget HACMPgroup
+ha_oracle_rg:node_down_local[395] RESOURCE_GROUPS=ha_oracle_rg
+ha_oracle_rg:node_down_local[400] grep 'value ='
+ha_oracle_rg:node_down_local[400] cut -f2 '-d"'
+ha_oracle_rg:node_down_local[399] odmget -q group='ha_oracle_rg AND name=EXPORT_FILESYSTEM' HACMPresource
+ha_oracle_rg:node_down_local[399] EXPORTLIST=''
+ha_oracle_rg:node_down_local[400] [[ -n '' ]]
+ha_oracle_rg:node_down_local[423] [[ false == true ]]
+ha_oracle_rg:node_down_local[432] [[ -n '' ]]
+ha_oracle_rg:node_down_local[443] [[ '' != TRUE ]]
+ha_oracle_rg:node_down_local[444] clcallev release_vg_fs ALL datavg '' ''
Mar 25 19:22:03 EVENT START: release_vg_fs ALL datavg
+ha_oracle_rg:release_vg_fs[+64] [[ high = high ]]
+ha_oracle_rg:release_vg_fs[+64] version=1.4.1.55
+ha_oracle_rg:release_vg_fs[+66] STATUS=0
+ha_oracle_rg:release_vg_fs[+66] typeset -i DEF_VARYON_ACTION=0
+ha_oracle_rg:release_vg_fs[+70] [[ RELEASE != RELEASE ]]
+ha_oracle_rg:release_vg_fs[+76] FILE_SYSTEMS=ALL
+ha_oracle_rg:release_vg_fs[+77] VOLUME_GROUPS=datavg
+ha_oracle_rg:release_vg_fs[+78] OEM_FILE_SYSTEMS=
+ha_oracle_rg:release_vg_fs[+79] OEM_VOLUME_GROUPS=
+ha_oracle_rg:release_vg_fs[+80] VG_MOD=false
+ha_oracle_rg:release_vg_fs[+81] SELECTIVE_FAILOVER=false
+ha_oracle_rg:release_vg_fs[+81] typeset -i DEF_VARYOFF_ACTION=0
+ha_oracle_rg:release_vg_fs[+89] [[ ALL = ALL ]]
+ha_oracle_rg:release_vg_fs[+91] FILE_SYSTEMS=
+ha_oracle_rg:release_vg_fs[+91] [[ -z datavg ]]
+ha_oracle_rg:release_vg_fs[+91] [[ -n datavg ]]
+ha_oracle_rg:release_vg_fs[+103] +ha_oracle_rg:release_vg_fs[+103] rdsort datavg
+ha_oracle_rg:release_vg_fs[rdsort+4] echo datavg
+ha_oracle_rg:release_vg_fs[rdsort+4] sed -e s/\ /\
/g
+ha_oracle_rg:release_vg_fs[rdsort+5] sort -ru
VOLUME_GROUPS=datavg
+ha_oracle_rg:release_vg_fs[+103] [[ true = true ]]
+ha_oracle_rg:release_vg_fs[+103] [[ ERROR != ONLINE ]]
+ha_oracle_rg:release_vg_fs[+111] export SELECTIVE_FAILOVER=true
+ha_oracle_rg:release_vg_fs[+113] date
Mon Mar 25 19:22:03 BEIST 2013
+ha_oracle_rg:release_vg_fs[+115] lsvg -L -o
+ha_oracle_rg:release_vg_fs[+115] grep -x datavg
+ha_oracle_rg:release_vg_fs[+130] date
Mon Mar 25 19:22:03 BEIST 2013
+ha_oracle_rg:release_vg_fs[+130] [[ false = true ]]
+ha_oracle_rg:release_vg_fs[+142] [[ -n ]]
+ha_oracle_rg:release_vg_fs[+204] [[ -n ]]
+ha_oracle_rg:release_vg_fs[+219] cl_unexport_fs
+ha_oracle_rg:cl_unexport_fs[154] [[ high == high ]]
+ha_oracle_rg:cl_unexport_fs[154] version=1.4
+ha_oracle_rg:cl_unexport_fs[156] UNEXPORT_V3=''
+ha_oracle_rg:cl_unexport_fs[157] UNEXPORT_V4=''
+ha_oracle_rg:cl_unexport_fs[159] STATUS=0
+ha_oracle_rg:cl_unexport_fs[160] cl_get_path
+ha_oracle_rg:cl_unexport_fs[160] HA_DIR=es
+ha_oracle_rg:cl_unexport_fs[162] PROC_RES=false
+ha_oracle_rg:cl_unexport_fs[166] [[ 0 != 0 ]]
+ha_oracle_rg:cl_unexport_fs[170] EMULATE=REAL
+ha_oracle_rg:cl_unexport_fs[172] set -u
+ha_oracle_rg:cl_unexport_fs[174] (( 2 != 2 ))
+ha_oracle_rg:cl_unexport_fs[180] [[ __AIX__ == __AIX__ ]]
+ha_oracle_rg:cl_unexport_fs[182] cut -c1-2
+ha_oracle_rg:cl_unexport_fs[182] oslevel -r
+ha_oracle_rg:cl_unexport_fs[182] (( 61 > 52 ))
+ha_oracle_rg:cl_unexport_fs[184] FORCE=-F
+ha_oracle_rg:cl_unexport_fs[198] EXPFILE=/usr/es/sbin/cluster/etc/exports
+ha_oracle_rg:cl_unexport_fs[199] DARE_EVENT=reconfig_resource_release
+ha_oracle_rg:cl_unexport_fs[202] unexport_v4=''
+ha_oracle_rg:cl_unexport_fs[203] [[ -z '' ]]
+ha_oracle_rg:cl_unexport_fs[203] [[ release_vg_fs == reconfig_resource_release ]]
+ha_oracle_rg:cl_unexport_fs[214] [[ -z '' ]]
+ha_oracle_rg:cl_unexport_fs[214] [[ -r /usr/es/sbin/cluster/etc/exports ]]
+ha_oracle_rg:cl_unexport_fs[263] hasrv=''
+ha_oracle_rg:cl_unexport_fs[265] [[ -z '' ]]
+ha_oracle_rg:cl_unexport_fs[267] query=name='STABLE_STORAGE_PATH AND group=ha_oracle_rg'
+ha_oracle_rg:cl_unexport_fs[270] sed -n $'s/^[ \t]*value = "\\(.*\\)"/\\1/p'
+ha_oracle_rg:cl_unexport_fs[269] odmget -q name='STABLE_STORAGE_PATH AND group=ha_oracle_rg' HACMPresource
+ha_oracle_rg:cl_unexport_fs[269] STABLE_STORAGE_PATH=''
+ha_oracle_rg:cl_unexport_fs[272] [[ -z '' ]]
+ha_oracle_rg:cl_unexport_fs[274] STABLE_STORAGE_PATH=/var/adm/nfsv4.hacmp/ha_oracle_rg
+ha_oracle_rg:cl_unexport_fs[277] [[ -z ser ]]
+ha_oracle_rg:cl_unexport_fs[284] ps -ef
+ha_oracle_rg:cl_unexport_fs[284] grep -w nfsd
+ha_oracle_rg:cl_unexport_fs[284] grep -qw -- '-gp on'
+ha_oracle_rg:cl_unexport_fs[288] gp=off
+ha_oracle_rg:cl_unexport_fs[291] /usr/sbin/bootinfo -K
+ha_oracle_rg:cl_unexport_fs[291] KERNEL_BITS=64
+ha_oracle_rg:cl_unexport_fs[293] [[ off == on ]]
+ha_oracle_rg:cl_unexport_fs[298] NFSv4_REGISTERED=0
+ha_oracle_rg:cl_unexport_fs[302] V3=:2:3
+ha_oracle_rg:cl_unexport_fs[303] V4=:4
+ha_oracle_rg:cl_unexport_fs[305] [[ release_vg_fs != reconfig_resource_release ]]
+ha_oracle_rg:cl_unexport_fs[306] [[ release_vg_fs != release_vg_fs ]]
+ha_oracle_rg:cl_unexport_fs[342] ALLEXPORTS=All_exports
+ha_oracle_rg:cl_unexport_fs[344] cl_RMupdate resource_releasing All_exports cl_unexport_fs
2013-03-25T19:22:04.977633
2013-03-25T19:22:04.991516
Reference string: Mon.Mar.25.19:22:04.BEIST.2013.cl_unexport_fs.All_exports.ha_oracle_rg.ref
+ha_oracle_rg:cl_unexport_fs[346] echo
+ha_oracle_rg:cl_unexport_fs[346] tr ' ' '\n'
+ha_oracle_rg:cl_unexport_fs[346] sort -u
+ha_oracle_rg:cl_unexport_fs[346] FILESYSTEM_LIST=''
+ha_oracle_rg:cl_unexport_fs[458] [[ -n '' ]]
+ha_oracle_rg:cl_unexport_fs[486] ALLNOERREXPORT=All_nonerror_exports
+ha_oracle_rg:cl_unexport_fs[488] cl_RMupdate resource_down All_nonerror_exports cl_unexport_fs
2013-03-25T19:22:05.117268
2013-03-25T19:22:05.131032
Reference string: Mon.Mar.25.19:22:05.BEIST.2013.cl_unexport_fs.All_nonerror_exports.ha_oracle_rg.ref
+ha_oracle_rg:cl_unexport_fs[490] exit 0
+ha_oracle_rg:release_vg_fs[+222] [[ -n ]]
+ha_oracle_rg:release_vg_fs[+240] +ha_oracle_rg:release_vg_fs[+240] cl_rrmethods2call prevg_offline
+ha_oracle_rg:cl_rrmethods2call[+49] [[ high = high ]]
+ha_oracle_rg:cl_rrmethods2call[+49] version=1.17
+ha_oracle_rg:cl_rrmethods2call[+50] +ha_oracle_rg:cl_rrmethods2call[+50] cl_get_path
HA_DIR=es
+ha_oracle_rg:cl_rrmethods2call[+76] RRMETHODS=
+ha_oracle_rg:cl_rrmethods2call[+77] NEED_RR_ENV_VARS=no
+ha_oracle_rg:cl_rrmethods2call[+114] [[ no = yes ]]
+ha_oracle_rg:cl_rrmethods2call[+120] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+125] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+130] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+135] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+140] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+145] echo
+ha_oracle_rg:cl_rrmethods2call[+146] exit 0
METHODS=
+ha_oracle_rg:release_vg_fs[+241] SKIPVARYOFF=0
+ha_oracle_rg:release_vg_fs[+265] [ RELEASE = RELEASE ]
+ha_oracle_rg:release_vg_fs[+270] SKIPVARYOFF=0
+ha_oracle_rg:release_vg_fs[+272] (( 0 == 1 ))
+ha_oracle_rg:release_vg_fs[+288] [[ -n datavg ]]
+ha_oracle_rg:release_vg_fs[+291] +ha_oracle_rg:release_vg_fs[+291] rdsort datavg
+ha_oracle_rg:release_vg_fs[rdsort+4] echo datavg
+ha_oracle_rg:release_vg_fs[rdsort+4] sed -e s/\ /\
/g
+ha_oracle_rg:release_vg_fs[rdsort+5] sort -ru
VOLUME_GROUPS=datavg
+ha_oracle_rg:release_vg_fs[+291] [[ 0 -eq 0 ]]
+ha_oracle_rg:release_vg_fs[+295] cl_deactivate_vgs datavg
+ha_oracle_rg:cl_deactivate_vgs[440] [[ high == high ]]
+ha_oracle_rg:cl_deactivate_vgs[440] version=1.1.11.2
+ha_oracle_rg:cl_deactivate_vgs[441] cl_get_path
+ha_oracle_rg:cl_deactivate_vgs[441] HA_DIR=es
+ha_oracle_rg:cl_deactivate_vgs[443] STATUS=0
+ha_oracle_rg:cl_deactivate_vgs[443] typeset -i STATUS
+ha_oracle_rg:cl_deactivate_vgs[444] TMP_VARYOFF_STATUS=/tmp/_deactivate_vgs.tmp
+ha_oracle_rg:cl_deactivate_vgs[445] sddsrv_off=FALSE
+ha_oracle_rg:cl_deactivate_vgs[446] ALLVGS=All_volume_groups
+ha_oracle_rg:cl_deactivate_vgs[447] OEM_CALL=false
+ha_oracle_rg:cl_deactivate_vgs[449] (( 1 != 0 ))
+ha_oracle_rg:cl_deactivate_vgs[449] [[ datavg == -c ]]
+ha_oracle_rg:cl_deactivate_vgs[458] EVENT_TYPE=not_set
+ha_oracle_rg:cl_deactivate_vgs[459] EVENT_TYPE=not_set
+ha_oracle_rg:cl_deactivate_vgs[462] : if JOB_TYPE is set, and it does not equal to GROUP, then
+ha_oracle_rg:cl_deactivate_vgs[463] : we are processing for process_resources
+ha_oracle_rg:cl_deactivate_vgs[465] [[ 0 != 0 ]]
+ha_oracle_rg:cl_deactivate_vgs[469] : Otherwise, check for valid call
+ha_oracle_rg:cl_deactivate_vgs[471] PROC_RES=false
+ha_oracle_rg:cl_deactivate_vgs[472] (( 1 == 0 ))
+ha_oracle_rg:cl_deactivate_vgs[480] : set -u will report an error if any variable used in the script is not set
+ha_oracle_rg:cl_deactivate_vgs[482] set -u
+ha_oracle_rg:cl_deactivate_vgs[485] : Remove the status file if it currently exists
+ha_oracle_rg:cl_deactivate_vgs[487] rm -f /tmp/_deactivate_vgs.tmp
+ha_oracle_rg:cl_deactivate_vgs[490] : Each of the V, R, M and F fields are padded to fixed length,
+ha_oracle_rg:cl_deactivate_vgs[491] : to allow reliable comparisons. E.g., maximum VRMF is
+ha_oracle_rg:cl_deactivate_vgs[492] : 99.99.999.999
+ha_oracle_rg:cl_deactivate_vgs[494] typeset -i V R M F
+ha_oracle_rg:cl_deactivate_vgs[495] typeset -Z2 V
+ha_oracle_rg:cl_deactivate_vgs[496] typeset -Z2 R
+ha_oracle_rg:cl_deactivate_vgs[497] typeset -Z3 M
+ha_oracle_rg:cl_deactivate_vgs[498] typeset -Z3 F
+ha_oracle_rg:cl_deactivate_vgs[499] VRMF=0
+ha_oracle_rg:cl_deactivate_vgs[499] typeset -i VRMF
+ha_oracle_rg:cl_deactivate_vgs[502] : If the sddsrv daemon is running - vpath dead path detection and
+ha_oracle_rg:cl_deactivate_vgs[503] : recovery - turn it off, since interactions with the fibre channel
+ha_oracle_rg:cl_deactivate_vgs[504] : device driver will, in the case where there actually is a dead path,
+ha_oracle_rg:cl_deactivate_vgs[505] : slow down every vpath operation.
+ha_oracle_rg:cl_deactivate_vgs[507] ls '/dev/vpath*'
+ha_oracle_rg:cl_deactivate_vgs[507] 1> /dev/null 2>& 1
+ha_oracle_rg:cl_deactivate_vgs[569] : Setup for the hats_adapter calls
+ha_oracle_rg:cl_deactivate_vgs[571] cldomain
+ha_oracle_rg:cl_deactivate_vgs[571] HA_DOMAIN_NAME=ha_oracle
+ha_oracle_rg:cl_deactivate_vgs[571] export HA_DOMAIN_NAME
+ha_oracle_rg:cl_deactivate_vgs[572] HB_SERVER_SOCKET=/var/ha/soc/topsvcs/server_socket
+ha_oracle_rg:cl_deactivate_vgs[572] export HB_SERVER_SOCKET
+ha_oracle_rg:cl_deactivate_vgs[575] : Special processing is required for 2 node clusters. determine the number
+ha_oracle_rg:cl_deactivate_vgs[576] : of nodes and AIX level
+ha_oracle_rg:cl_deactivate_vgs[578] TWO_NODE_CLUSTER=FALSE
+ha_oracle_rg:cl_deactivate_vgs[578] export TWO_NODE_CLUSTER
+ha_oracle_rg:cl_deactivate_vgs[579] FS_TYPES=''
+ha_oracle_rg:cl_deactivate_vgs[579] export FS_TYPES
+ha_oracle_rg:cl_deactivate_vgs[580] grep 'name ='
+ha_oracle_rg:cl_deactivate_vgs[580] sort
+ha_oracle_rg:cl_deactivate_vgs[580] odmget HACMPnode
+ha_oracle_rg:cl_deactivate_vgs[580] uniq
+ha_oracle_rg:cl_deactivate_vgs[580] wc -l
+ha_oracle_rg:cl_deactivate_vgs[580] (( 2 == 2 ))
+ha_oracle_rg:cl_deactivate_vgs[581] [[ -n '' ]]
+ha_oracle_rg:cl_deactivate_vgs[621] : Pick up a list of currently varyd on volume groups
+ha_oracle_rg:cl_deactivate_vgs[623] lsvg -L -o
+ha_oracle_rg:cl_deactivate_vgs[623] 2> /tmp/lsvg.err
+ha_oracle_rg:cl_deactivate_vgs[623] VG_ON_LIST=rootvg
+ha_oracle_rg:cl_deactivate_vgs[626] : if we are not called from process_resources, we have the old-style
+ha_oracle_rg:cl_deactivate_vgs[627] : environment and parameters
+ha_oracle_rg:cl_deactivate_vgs[629] [[ false == false ]]
+ha_oracle_rg:cl_deactivate_vgs[631] : Update the Resource Manager - let it know that were varying off these
+ha_oracle_rg:cl_deactivate_vgs[632] : volume groups
+ha_oracle_rg:cl_deactivate_vgs[634] cl_RMupdate resource_releasing All_volume_groups cl_deactivate_vgs
2013-03-25T19:22:05.410391
2013-03-25T19:22:05.424400
Reference string: Mon.Mar.25.19:22:05.BEIST.2013.cl_deactivate_vgs.All_volume_groups.ha_oracle_rg.ref
+ha_oracle_rg:cl_deactivate_vgs[637] : First process any mndhb for these volume groups
+ha_oracle_rg:cl_deactivate_vgs[639] vgs_process_mndhb datavg
+ha_oracle_rg:cl_deactivate_vgs[65] [[ high == high ]]
+ha_oracle_rg:cl_deactivate_vgs[65] set -x
+ha_oracle_rg:cl_deactivate_vgs[67] VG_LIST=datavg
+ha_oracle_rg:cl_deactivate_vgs[67] typeset VG_LIST
+ha_oracle_rg:cl_deactivate_vgs[68] typeset lv_list
+ha_oracle_rg:cl_deactivate_vgs[69] typeset lv_base
+ha_oracle_rg:cl_deactivate_vgs[71] STATUS=0
+ha_oracle_rg:cl_deactivate_vgs[71] typeset -i STATUS
+ha_oracle_rg:cl_deactivate_vgs[72] RC=0
+ha_oracle_rg:cl_deactivate_vgs[72] typeset -i RC
+ha_oracle_rg:cl_deactivate_vgs[73] rc_hats_adapter=0
+ha_oracle_rg:cl_deactivate_vgs[73] typeset -i rc_hats_adapter
+ha_oracle_rg:cl_deactivate_vgs[78] : If this vg contains lvs that are part of a mndhb network, tell
+ha_oracle_rg:cl_deactivate_vgs[79] : topsvcs to stop monitoring the network.
+ha_oracle_rg:cl_deactivate_vgs[80] : Note that we use clrsctinfo/cllsif because it will do the raw device
+ha_oracle_rg:cl_deactivate_vgs[81] : name mapping for us.
+ha_oracle_rg:cl_deactivate_vgs[83] grep :datavg:
+ha_oracle_rg:cl_deactivate_vgs[83] cut -f 7 -d:
+ha_oracle_rg:cl_deactivate_vgs[83] clrsctinfo -p cllsif -c
+ha_oracle_rg:cl_deactivate_vgs[83] sort -u
+ha_oracle_rg:cl_deactivate_vgs[83] lv_list=''
+ha_oracle_rg:cl_deactivate_vgs[109] : if there were any calls to hats_adapter give topsvcs a bit to catch up
+ha_oracle_rg:cl_deactivate_vgs[111] [[ -n '' ]]
+ha_oracle_rg:cl_deactivate_vgs[112] return 0
+ha_oracle_rg:cl_deactivate_vgs[641] PS4_LOOP=''
+ha_oracle_rg:cl_deactivate_vgs[641] typeset PS4_LOOP
+ha_oracle_rg:cl_deactivate_vgs[643] : Now, process the list of volume groups passed in
+ha_oracle_rg:cl_deactivate_vgs:datavg[647] PS4_LOOP=datavg
+ha_oracle_rg:cl_deactivate_vgs:datavg[649] : Find out if it is varied on
+ha_oracle_rg:cl_deactivate_vgs:datavg[651] [[ false == false ]]
+ha_oracle_rg:cl_deactivate_vgs:datavg[654] : Dealing with AIX LVM volume groups
+ha_oracle_rg:cl_deactivate_vgs:datavg[656] print rootvg
+ha_oracle_rg:cl_deactivate_vgs:datavg[656] grep -qw datavg
+ha_oracle_rg:cl_deactivate_vgs:datavg[659] : This one is not varyd on - skip it
+ha_oracle_rg:cl_deactivate_vgs:datavg[661] cl_echo 30 'cl_deactivate_vgs: Volume group datavg already varied off.\n' cl_deactivate_vgs datavg
+ha_oracle_rg:cl_echo[+35] version=1.16
+ha_oracle_rg:cl_echo[+84] HACMP_OUT_FILE=/var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+89] set +u
+ha_oracle_rg:cl_echo[+89] [[ -n ]]
+ha_oracle_rg:cl_echo[+92] set -u
+ha_oracle_rg:cl_echo[+95] print -n -u2 Mar 25 2013 19:22:05
Mar 25 2013 19:22:05 +ha_oracle_rg:cl_echo[+96] MSG_ID=30
+ha_oracle_rg:cl_echo[+97] shift
+ha_oracle_rg:cl_echo[+98] dspmsg scripts.cat 30 cl_deactivate_vgs: Volume group datavg already varied off.\n cl_deactivate_vgs datavg
+ha_oracle_rg:cl_echo[+98] 1>& 2
cl_deactivate_vgs: Volume group datavg already varied off.
+ha_oracle_rg:cl_echo[+101] clsynclog /var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+101] 1> /dev/null 2>& 1
+ha_oracle_rg:cl_echo[+105] return 0
+ha_oracle_rg:cl_deactivate_vgs:datavg[688] unset PS4_LOOP
+ha_oracle_rg:cl_deactivate_vgs[778] : Wait to sync all the background instances of vgs_varyoff
+ha_oracle_rg:cl_deactivate_vgs[780] wait
+ha_oracle_rg:cl_deactivate_vgs[783] : Collect any failure indications from backgrounded varyoff processing
+ha_oracle_rg:cl_deactivate_vgs[785] [[ -f /tmp/_deactivate_vgs.tmp ]]
+ha_oracle_rg:cl_deactivate_vgs[808] : Update Resource Manager - tell it that all the ones that were not reported
+ha_oracle_rg:cl_deactivate_vgs[809] : as failures, worked.
+ha_oracle_rg:cl_deactivate_vgs[811] ALLNOERRVGS=All_nonerror_volume_groups
+ha_oracle_rg:cl_deactivate_vgs[812] [[ false == false ]]
+ha_oracle_rg:cl_deactivate_vgs[813] cl_RMupdate resource_down All_nonerror_volume_groups cl_deactivate_vgs
2013-03-25T19:22:05.642565
2013-03-25T19:22:05.656501
Reference string: Mon.Mar.25.19:22:05.BEIST.2013.cl_deactivate_vgs.All_nonerror_volume_groups.ha_oracle_rg.ref
+ha_oracle_rg:cl_deactivate_vgs[820] [[ FALSE == TRUE ]]
+ha_oracle_rg:cl_deactivate_vgs[828] exit 0
+ha_oracle_rg:release_vg_fs[+338] [[ -n ]]
+ha_oracle_rg:release_vg_fs[+352] +ha_oracle_rg:release_vg_fs[+352] cl_rrmethods2call postvg_offline
+ha_oracle_rg:cl_rrmethods2call[+49] [[ high = high ]]
+ha_oracle_rg:cl_rrmethods2call[+49] version=1.17
+ha_oracle_rg:cl_rrmethods2call[+50] +ha_oracle_rg:cl_rrmethods2call[+50] cl_get_path
HA_DIR=es
+ha_oracle_rg:cl_rrmethods2call[+76] RRMETHODS=
+ha_oracle_rg:cl_rrmethods2call[+77] NEED_RR_ENV_VARS=no
+ha_oracle_rg:cl_rrmethods2call[+114] [[ no = yes ]]
+ha_oracle_rg:cl_rrmethods2call[+120] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+125] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+130] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+135] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+140] [[ -n ]]
+ha_oracle_rg:cl_rrmethods2call[+145] echo
+ha_oracle_rg:cl_rrmethods2call[+146] exit 0
METHODS=
+ha_oracle_rg:release_vg_fs[+365] exit 0
Mar 25 19:22:05 EVENT COMPLETED: release_vg_fs ALL datavg 0
+ha_oracle_rg:node_down_local[453] [[ -n '' ]]
+ha_oracle_rg:node_down_local[475] [[ R == RELEASE ]]
+ha_oracle_rg:node_down_local[491] [[ false != true ]]
+ha_oracle_rg:node_down_local[493] release_addr
+ha_oracle_rg:node_down_local[8] [[ -n '' ]]
+ha_oracle_rg:node_down_local[22] [[ -n ser ]]
+ha_oracle_rg:node_down_local[24] [[ '' != TRUE ]]
+ha_oracle_rg:node_down_local[25] clcallev release_service_addr ser
Mar 25 19:22:05 EVENT START: release_service_addr ser
+ha_oracle_rg:release_service_addr[+120] [[ high = high ]]
+ha_oracle_rg:release_service_addr[+120] version=1.40
+ha_oracle_rg:release_service_addr[+121] +ha_oracle_rg:release_service_addr[+121] cl_get_path
HA_DIR=es
+ha_oracle_rg:release_service_addr[+122] +ha_oracle_rg:release_service_addr[+122] cl_get_path -S
OP_SEP=~
+ha_oracle_rg:release_service_addr[+124] STATUS=0
+ha_oracle_rg:release_service_addr[+127] [ ! -n ]
+ha_oracle_rg:release_service_addr[+129] EMULATE=REAL
+ha_oracle_rg:release_service_addr[+132] PROC_RES=false
+ha_oracle_rg:release_service_addr[+136] [[ 0 != 0 ]]
+ha_oracle_rg:release_service_addr[+141] [ 1 -eq 0 ]
+ha_oracle_rg:release_service_addr[+146] export RESOURCE_GROUPS=ha_oracle_rg
+ha_oracle_rg:release_service_addr[+162] saveNSORDER=UNDEFINED
+ha_oracle_rg:release_service_addr[+163] NSORDER=local
+ha_oracle_rg:release_service_addr[+163] export NSORDER
+ha_oracle_rg:release_service_addr[+170] export GROUPNAME
+ha_oracle_rg:release_service_addr[+170] [[ false = true ]]
+ha_oracle_rg:release_service_addr[+180] SERVICELABELS=ser
+ha_oracle_rg:release_service_addr[+184] ALLSRVADDRS=All_service_addrs
+ha_oracle_rg:release_service_addr[+184] [[ REAL = EMUL ]]
+ha_oracle_rg:release_service_addr[+190] cl_RMupdate resource_releasing All_service_addrs release_service_addr
2013-03-25T19:22:05.929736
2013-03-25T19:22:05.944028
Reference string: Mon.Mar.25.19:22:05.BEIST.2013.release_service_addr.All_service_addrs.ha_oracle_rg.ref
+ha_oracle_rg:release_service_addr[+200] clgetif -a ser
+ha_oracle_rg:release_service_addr[+200] LC_ALL=C
en1
+ha_oracle_rg:release_service_addr[+201] return_code=0
+ha_oracle_rg:release_service_addr[+202] [ 0 -ne 0 ]
+ha_oracle_rg:release_service_addr[+229] +ha_oracle_rg:release_service_addr[+229] name_to_addr ser
textual_addr=192.168.4.13
+ha_oracle_rg:release_service_addr[+230] +ha_oracle_rg:release_service_addr[+230] clgetif -a 192.168.4.13
+ha_oracle_rg:release_service_addr[+230] LC_ALL=C
INTERFACE=en1
+ha_oracle_rg:release_service_addr[+231] [ en1 = ]
+ha_oracle_rg:release_service_addr[+258] +ha_oracle_rg:release_service_addr[+258] clgetif -n 192.168.4.13
+ha_oracle_rg:release_service_addr[+258] LC_ALL=C
NETMASK=255.255.255.0
+ha_oracle_rg:release_service_addr[+260] +ha_oracle_rg:release_service_addr[+260] cllsif -J ~
+ha_oracle_rg:release_service_addr[+260] grep -wF 192.168.4.13
+ha_oracle_rg:release_service_addr[+260] cut -d~ -f3
+ha_oracle_rg:release_service_addr[+260] sort -u
NETWORK=net_ether_01
+ha_oracle_rg:release_service_addr[+267] +ha_oracle_rg:release_service_addr[+267] cllsif -J ~ -Si ha_db02
+ha_oracle_rg:release_service_addr[+267] grep ~boot~
+ha_oracle_rg:release_service_addr[+267] cut -d~ -f3,7
+ha_oracle_rg:release_service_addr[+267] grep ^net_ether_01~
+ha_oracle_rg:release_service_addr[+267] cut -d~ -f2
+ha_oracle_rg:release_service_addr[+267] tail -1
BOOT=192.168.2.12
+ha_oracle_rg:release_service_addr[+269] [ -z 192.168.2.12 ]
+ha_oracle_rg:release_service_addr[+299] SNA_LAN_LINKS=
+ha_oracle_rg:release_service_addr[+308] SNA_CONNECTIONS=
+ha_oracle_rg:release_service_addr[+308] [[ -n ]]
+ha_oracle_rg:release_service_addr[+323] [ -n en1 ]
+ha_oracle_rg:release_service_addr[+325] [ REAL = EMUL ]
+ha_oracle_rg:release_service_addr[+330] +ha_oracle_rg:release_service_addr[+330] get_inet_family 192.168.4.13
+ha_oracle_rg:release_service_addr[get_inet_family+3] ip_label=192.168.4.13
+ha_oracle_rg:release_service_addr[get_inet_family+4] +ha_oracle_rg:release_service_addr[get_inet_family+4] cllsif -J ~ -Sn 192.168.4.13
+ha_oracle_rg:release_service_addr[get_inet_family+4] awk -F~ {print $15}
inet_family=AF_INET
+ha_oracle_rg:release_service_addr[get_inet_family+7] echo inet
+ha_oracle_rg:release_service_addr[get_inet_family+8] return
INET_FAMILY=inet
+ha_oracle_rg:release_service_addr[+330] [[ inet = inet6 ]]
+ha_oracle_rg:release_service_addr[+336] cl_swap_IP_address rotating release en1 192.168.2.12 192.168.4.13 255.255.255.0
+ha_oracle_rg:cl_swap_IP_address[+1115] [[ high = high ]]
+ha_oracle_rg:cl_swap_IP_address[+1115] version=1.9.1.110
+ha_oracle_rg:cl_swap_IP_address[+1116] +ha_oracle_rg:cl_swap_IP_address[+1116] cl_get_path
HA_DIR=es
+ha_oracle_rg:cl_swap_IP_address[+1117] +ha_oracle_rg:cl_swap_IP_address[+1117] cl_get_path -S
OP_SEP=~
+ha_oracle_rg:cl_swap_IP_address[+1118] export LC_ALL=C
+ha_oracle_rg:cl_swap_IP_address[+1119] RESTORE_ROUTES=/usr/es/sbin/cluster/.restore_routes
+ha_oracle_rg:cl_swap_IP_address[+1123] cldomain
+ha_oracle_rg:cl_swap_IP_address[+1123] export HA_DOMAIN_NAME=ha_oracle
+ha_oracle_rg:cl_swap_IP_address[+1124] export HB_SERVER_SOCKET=/var/ha/soc/topsvcs/server_socket
+ha_oracle_rg:cl_swap_IP_address[+1125] BINDIR=/usr/sbin/rsct/bin
+ha_oracle_rg:cl_swap_IP_address[+1128] +ha_oracle_rg:cl_swap_IP_address[+1128] clmixver
MIXVER=11
+ha_oracle_rg:cl_swap_IP_address[+1129] MIXVERRC=0
+ha_oracle_rg:cl_swap_IP_address[+1131] cl_echo 33 Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating release en1 192.168.2.12 192.168.4.13 255.255.255.0 /usr/es/sbin/cluster/events/utils/cl_swap_IP_address rotating release en1 192.168.2.12 192.168.4.13 255.255.255.0
+ha_oracle_rg:cl_echo[+35] version=1.16
+ha_oracle_rg:cl_echo[+84] HACMP_OUT_FILE=/var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+89] set +u
+ha_oracle_rg:cl_echo[+89] [[ -n ]]
+ha_oracle_rg:cl_echo[+92] set -u
+ha_oracle_rg:cl_echo[+95] print -n -u2 Mar 25 2013 19:22:06
Mar 25 2013 19:22:06 +ha_oracle_rg:cl_echo[+96] MSG_ID=33
+ha_oracle_rg:cl_echo[+97] shift
+ha_oracle_rg:cl_echo[+98] dspmsg scripts.cat 33 Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating release en1 192.168.2.12 192.168.4.13 255.255.255.0 /usr/es/sbin/cluster/events/utils/cl_swap_IP_address rotating release en1 192.168.2.12 192.168.4.13 255.255.255.0
+ha_oracle_rg:cl_echo[+98] 1>& 2
Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating release en1 192.168.2.12 192.168.4.13 255.255.255.0+ha_oracle_rg:cl_echo[+101] clsynclog /var/hacmp/log/hacmp.out
+ha_oracle_rg:cl_echo[+101] 1> /dev/null 2>& 1


该贴由system转至本版2014-9-11 21:41:27

该贴由system转至本版2014-9-11 23:42:55




赞(0)    操作        顶端 
总帖数
1
每页帖数
101/1页1
返回列表
发新帖子
请输入验证码: 点击刷新验证码
您需要登录后才可以回帖 登录 | 注册
技术讨论