Red Hat 5.8: Adding a Node to a Single-Node 11gR2 RAC
1. Preparation:
The existing environment is a single-node 11.2.0.3 RAC with one node, edison1. In this section we add a second node, edison2.
All work is done while the RAC is up and running and does not affect the existing environment.
The RAC configuration is as follows:
Hostname                edison1.oracle.com   edison2.oracle.com
Public IP (eth0)        192.168.10.181       192.168.10.183
Virtual IP (eth0)       192.168.10.182       192.168.10.184
Private IP (eth1)       172.168.10.191       172.168.10.192
Oracle RAC SID          PROD1                PROD2
Cluster database name   prod
SCAN IP                 192.168.10.186
Operating system        Red Hat 5.8
Storage                 ASM
Oracle version          11.2.0.3
1.1. Raw and udev configuration:
Disk group plan:
systemdg
data
backupdg
[root@edison1 ~]# ls -ltr /dev/asm*
brw-rw---- 1 grid asmadmin 8, 80 Oct 11 23:06 /dev/asm-diskn
brw-rw---- 1 grid asmadmin 8, 96 Oct 11 23:06 /dev/asm-disko
brw-rw---- 1 grid asmadmin 8, 144 Oct 11 23:19 /dev/asm-diskr
brw-rw---- 1 grid asmadmin 8, 208 Oct 11 23:19 /dev/asm-diskv
brw-rw---- 1 grid asmadmin 8, 112 Oct 11 23:19 /dev/asm-diskp
brw-rw---- 1 grid asmadmin 8, 128 Oct 11 23:19 /dev/asm-diskq
brw-rw---- 1 grid asmadmin 8, 32 Oct 11 23:19 /dev/asm-diskj
brw-rw---- 1 grid asmadmin 8, 176 Oct 11 23:19 /dev/asm-diskt
brw-rw---- 1 grid asmadmin 8, 48 Oct 11 23:19 /dev/asm-diskl
brw-rw---- 1 grid asmadmin 8, 192 Oct 11 23:19 /dev/asm-disku
brw-rw---- 1 grid asmadmin 8, 160 Oct 11 23:19 /dev/asm-disks
brw-rw---- 1 grid asmadmin 8, 64 Oct 11 23:19 /dev/asm-diskm
[root@edison1 ~]#
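The /dev/asm-* device names above are produced by udev rules. For reference, a sketch of one such rule as it might appear in /etc/udev/rules.d/99-oracle-asmdevices.rules on RHEL 5 (one rule per disk; the RESULT value is the disk's scsi_id and is hypothetical here):
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB1234567890", NAME="asm-diskj", OWNER="grid", GROUP="asmadmin", MODE="0660"
The new node must carry the same rules file so that it sees the shared disks under the same names with the same grid:asmadmin ownership.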
1.2. Add groups and users (new node):
-- Sample commands for adding groups and users (the asmadmin, asmdba, asmoper, and dba groups need to be created similarly; a complete sketch follows the id output below):
groupadd -g 1000 oinstall
useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba -d /home/grid -s /bin/bash grid
Example of adding the grid user to the dba group:
gpasswd -a grid dba
-- Check the user and group information:
[root@edison1 ~]# id oracle
uid=501(oracle) gid=6000(oinstall) groups=6000(oinstall),5001(asmdba),6001(dba)
[root@edison1 ~]# id grid
uid=500(grid) gid=6000(oinstall) groups=6000(oinstall),5000(asmadmin),5001(asmdba),5002(asmoper)
[root@edison2 ~]# id oracle
uid=501(oracle) gid=6000(oinstall) groups=6000(oinstall),5001(asmdba),6001(dba)
[root@edison2 ~]# id grid
uid=500(grid) gid=6000(oinstall) groups=6000(oinstall),5000(asmadmin),5001(asmdba),5002(asmoper)
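For the new node to end up with exactly the IDs shown above, the groups and users can be created as follows (UID/GID values copied from the id output; adjust if your environment differs):
groupadd -g 6000 oinstall
groupadd -g 5000 asmadmin
groupadd -g 5001 asmdba
groupadd -g 5002 asmoper
groupadd -g 6001 dba
useradd -m -u 500 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash grid
useradd -m -u 501 -g oinstall -G asmdba,dba -d /home/oracle -s /bin/bash oracle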
-- Set the passwords:
passwd oracle
passwd grid
-- Verify that the nobody user exists on all nodes:
Before installing the software, verify that the user nobody exists on each node by running the following command:
[root@edison1 ~]# id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody)
[root@edison2 ~]# id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody)
If the command displays information for the nobody user, there is no need to create it. If the user does not exist, create it with the following command:
/usr/sbin/useradd nobody
1.3. Disable the firewall and SELinux (new node):
Disable the firewall:
service iptables status
service iptables stop
chkconfig iptables off
chkconfig iptables --list
[root@edison2 ~]# service iptables status
Firewall is stopped.
[root@edison2 ~]# chkconfig iptables --list
iptables 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Edit /etc/selinux/config and set SELINUX to disabled.
[root@edison2 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted
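The config file change only takes effect at the next boot. To stop SELinux enforcing immediately without a reboot, you can additionally switch it to permissive mode:
setenforce 0
getenforce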
1.4. Configure time synchronization (new node):
In 11gR2, RAC time synchronization during installation can be implemented in two ways: NTP and CTSS.
When the installer finds that NTP is inactive, the Cluster Time Synchronization Service is installed in active mode and synchronizes the time across all nodes. If NTP is found to be configured, the Cluster Time Synchronization Service starts in observer mode, and Oracle Clusterware performs no time synchronization within the cluster.
The Oracle Cluster Time Synchronization Service (ctssd) is designed for organizations whose Oracle RAC databases cannot access an NTP service.
Here we use CTSS.
-- Configure CTSS:
To use the Cluster Time Synchronization Service in the cluster, the Network Time Protocol (NTP) and its configuration must be deactivated.
To deactivate NTP, stop the running ntpd service, disable it in the init sequence, and remove the ntp.conf file.
To complete these steps on Linux, run the following commands as root on both Oracle RAC nodes:
/sbin/service ntpd stop
chkconfig ntpd off
[root@edison1 ~]# /sbin/service ntpd stop
Shutting down ntpd: [FAILED]
[root@edison1 ~]# chkconfig ntpd off
[root@edison1 ~]# ls /etc/ntp.conf
/etc/ntp.conf
[root@edison1 ~]# mv /etc/ntp.conf /etc/ntp.conf.original
[root@edison1 ~]# chkconfig ntpd --list
ntpd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@edison1 ~]#
[root@edison2 ~]# /sbin/service ntpd stop
Shutting down ntpd: [FAILED]
[root@edison2 ~]# chkconfig ntpd off
[root@edison2 ~]# ls /etc/ntp.conf
/etc/ntp.conf
[root@edison2 ~]# mv /etc/ntp.conf /etc/ntp.conf.original
[root@edison2 ~]# chkconfig ntpd --list
ntpd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@edison2 ~]#
Also remove the following file:
rm /var/run/ntpd.pid
This file holds the PID of the NTP daemon.
After installation, confirm that ctssd is in active mode by running the following command as the grid owner:
[grid@edison1 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0
[grid@edison1 ~]$
1.5. Create the directory structure (new node):
First, look at the environment variables of the grid and oracle users on the existing node:
[grid@edison1 ~]$ cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
CRS_HOME=/g01/grid/app/11.2.0/grid
ORACLE_BASE=/g01
ORACLE_SID=+ASM1
ORACLE_HOME=/g01/grid/app/11.2.0/grid
PATH=$CRS_HOME/bin:$PATH:$HOME/bin
export PATH CRS_HOME ORACLE_BASE ORACLE_SID ORACLE_HOME
[oracle@edison1 ~]$ cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
ORACLE_BASE=/s01/oracle/app/oracle
ORACLE_HOME=/s01/oracle/app/oracle/product/11.2.0/dbhome_1
ORACLE_SID=PROD1
PATH=$ORACLE_HOME/OPatch:$ORACLE_HOME/bin:$PATH:$HOME/bin
export PATH ORACLE_BASE ORACLE_HOME ORACLE_SID
[oracle@edison1 ~]$
Create the same directory structure on the new node:
mkdir -p /g01/grid/app/11.2.0/grid
chown -R grid:oinstall /g01
mkdir -p /s01/oracle/app/oracle
mkdir -p /s01/oracle/app/oracle/product/11.2.0/dbhome_1
chown -R oracle:oinstall /s01
1.6. Configure environment variables (new node):
For the grid user, set the new node's environment variables to match the grid user's on the existing node.
For the oracle user, set the new node's environment variables to match the oracle user's on the existing node.
Note that the ASM and Oracle instance names must be changed to the new node's instance names.
After the changes:
oracle:
ORACLE_BASE=/s01/oracle/app/oracle
ORACLE_HOME=/s01/oracle/app/oracle/product/11.2.0/dbhome_1
ORACLE_SID=PROD2
PATH=$ORACLE_HOME/OPatch:$ORACLE_HOME/bin:$PATH:$HOME/bin
export PATH ORACLE_BASE ORACLE_HOME ORACLE_SID
grid:
CRS_HOME=/g01/grid/app/11.2.0/grid
ORACLE_BASE=/g01
ORACLE_SID=+ASM2
ORACLE_HOME=/g01/grid/app/11.2.0/grid
PATH=$CRS_HOME/bin:$PATH:$HOME/bin
export PATH CRS_HOME ORACLE_BASE ORACLE_SID ORACLE_HOME
1.7. Set resource limits for the installation users (new node):
1.7.1. Modify /etc/security/limits.conf
To improve software performance on Linux, the following resource limits must be raised for the Oracle software owner users (grid, oracle):

Shell limit                                     limits.conf entry   Hard limit
Maximum number of open file descriptors         nofile              65536
Maximum number of processes for a single user   nproc               16384
Maximum size of the process stack (KB)          stack               10240

As root, add the following to /etc/security/limits.conf on each Oracle RAC node, or run the command below:
cat >>/etc/security/limits.conf<<EOF
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF
Note: execute the whole block above, from cat through EOF, as a single command.
1.7.2. Modify /etc/pam.d/login
On each Oracle RAC node, add or edit the following line in /etc/pam.d/login:
cat >>/etc/pam.d/login<<EOF
session required pam_limits.so
EOF
Note: execute the whole block above as a single command.
1.7.3. Shell limits:
Make the following changes to the default shell startup files so that the ulimit settings are updated for all Oracle installation owners:
(1) For the Bourne, Bash, or Korn shell, add the following lines to /etc/profile by running this command:
cat >>/etc/profile<<EOF
if [ \$USER = "oracle" ] || [ \$USER = "grid" ]; then
    if [ \$SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
EOF
Note: execute the whole block above as a single command.
(2) For the C shell (csh or tcsh), add the following lines to /etc/csh.login by running this command:
cat >>/etc/csh.login<<EOF
if ( \$USER == "oracle" || \$USER == "grid" ) then
    limit maxproc 16384
    limit descriptors 65536
endif
EOF
Note: execute the whole block above as a single command.
1.7.4. Modify /etc/sysctl.conf:
Oracle Database 11g Release 2 on RHEL/OEL 5 requires the kernel parameter settings shown below. The values given are minimums, so if your system already uses larger values, no change is needed.
vi /etc/sysctl.conf
Simply mirror the settings from the existing node.
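For reference, a sketch of the documented 11.2 minimums (kernel.shmmax and kernel.shmall depend on physical memory, so copy the actual values from edison1 rather than using these directly):
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576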
Make the changes take effect:
sysctl -p
1.8. Modify /etc/hosts (add the following entries on all nodes):
192.168.10.181 edison1 edison1.oracle.com
192.168.10.182 edison1-vip
192.168.10.183 edison2 edison2.oracle.com
192.168.10.184 edison2-vip
192.168.10.186 edison-cluster edison-cluster-scan
172.168.10.191 edison1-priv
172.168.10.192 edison2-priv
1.9. Confirm that the required RPM packages are installed (new node):
Check that the libcap packages are all installed.
Install the commonly used tools.
Install the software required by Oracle.
(See the screenshots in the original post for the details.)
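A quick way to check is to query the packages by name; the list below is taken from the cluvfy package checks in section 2.2 (a sketch, not necessarily exhaustive):
rpm -q make binutils gcc libaio glibc compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel glibc-common glibc-devel glibc-headers gcc-c++ libaio-devel libgcc libstdc++ libstdc++-devel sysstat ksh
rpm -qa | grep libcap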
1.10. Configure user equivalence for the oracle and grid users (all nodes):
You can use the runSSHSetup.sh script to configure SSH mutual trust, as follows:
$ORACLE_HOME/oui/bin/runSSHSetup.sh -user oracle -hosts 'edison1 edison2' -advanced -exverify
You can also configure it manually; see any Linux SSH mutual-trust guide.
After configuring the trust, always test that it works; you will be prompted for a password the first time.
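A minimal round-trip test, run as both grid and oracle on each node (if any command still prompts for a password, the equivalence is not complete):
ssh edison1 date
ssh edison2 date
ssh edison1-priv date
ssh edison2-priv date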
1.11. Back up the OCR:
Before adding or removing a node, always back up the OCR, even though the OCR is also backed up automatically every four hours.
-- Perform a manual OCR backup:
ocrconfig -manualbackup
-- View the manual OCR backups:
ocrconfig -showbackup manual
[root@edison1 ~]# ocrconfig -manualbackup
edison1 2014/10/12 02:07:30 /g01/grid/app/11.2.0/grid/cdata/edison-cluster/backup_20141012_020730.ocr
[root@edison1 ~]# ocrconfig -showbackup manual
edison1 2014/10/12 02:07:30 /g01/grid/app/11.2.0/grid/cdata/edison-cluster/backup_20141012_020730.ocr
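The automatic four-hour backups can be listed the same way:
ocrconfig -showbackup auto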
2. Install the clusterware on the new node:
The approach here is to run the $ORACLE_HOME/oui/bin/addNode.sh script as the GI user on node 1. Before installing, we need to do some verification.
Before running the command, make sure CRS is running normally on all nodes of the existing cluster; otherwise addNode will fail.
2.1. Verify that the hardware and operating system setup is complete:
The verification reports the following warning:
[grid@edison1 ~]$ cluvfy stage -post hwos -n edison2
Performing post-checks for hardware and operating system setup
Checking node reachability...
Node reachability check passed from node "edison1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet "192.168.10.0" with node(s) edison2
TCP connectivity check passed for subnet "192.168.10.0"
Node connectivity passed for subnet "172.168.10.0" with node(s) edison2
TCP connectivity check passed for subnet "172.168.10.0"
Interfaces found on subnet "172.168.10.0" that are likely candidates for VIP are:
edison2 eth1:172.168.10.192
Interfaces found on subnet "192.168.10.0" that are likely candidates for a private interconnect are:
edison2 eth0:192.168.10.183
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "172.168.10.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "172.168.10.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Check for multiple users with UID value 0 passed
Time zone consistency check passed
Checking shared storage accessibility...
WARNING:
edison2:PRVF-7017 : Package cvuqdisk not installed
edison2
No shared storage found
Shared storage check failed on nodes "edison2"
Post-check for hardware and operating system setup was unsuccessful on all the nodes.
[grid@edison1 ~]$
Checking the ASM disk permissions shows nothing wrong, so we ignore the warning for now:
[grid@edison1 ~]$ ls -lrt /dev/asm*
brw-rw---- 1 grid asmadmin 8, 80 Oct 12 02:45 /dev/asm-diskn
brw-rw---- 1 grid asmadmin 8, 96 Oct 12 02:45 /dev/asm-disko
brw-rw---- 1 grid asmadmin 8, 128 Oct 12 04:32 /dev/asm-diskq
brw-rw---- 1 grid asmadmin 8, 32 Oct 12 04:32 /dev/asm-diskj
brw-rw---- 1 grid asmadmin 8, 112 Oct 12 04:47 /dev/asm-diskp
brw-rw---- 1 grid asmadmin 8, 208 Oct 12 04:47 /dev/asm-diskv
brw-rw---- 1 grid asmadmin 8, 144 Oct 12 04:47 /dev/asm-diskr
brw-rw---- 1 grid asmadmin 8, 192 Oct 12 04:47 /dev/asm-disku
brw-rw---- 1 grid asmadmin 8, 160 Oct 12 04:47 /dev/asm-disks
brw-rw---- 1 grid asmadmin 8, 64 Oct 12 04:47 /dev/asm-diskm
brw-rw---- 1 grid asmadmin 8, 48 Oct 12 04:47 /dev/asm-diskl
brw-rw---- 1 grid asmadmin 8, 176 Oct 12 04:47 /dev/asm-diskt
[root@edison2 ~]# ls -ltr /dev/asm*
brw-rw---- 1 grid asmadmin 8, 32 Oct 12 02:46 /dev/asm-diskj
brw-rw---- 1 grid asmadmin 8, 112 Oct 12 02:46 /dev/asm-disko
brw-rw---- 1 grid asmadmin 8, 160 Oct 12 02:46 /dev/asm-disks
brw-rw---- 1 grid asmadmin 8, 48 Oct 12 02:46 /dev/asm-diskl
brw-rw---- 1 grid asmadmin 8, 64 Oct 12 02:46 /dev/asm-diskm
brw-rw---- 1 grid asmadmin 8, 144 Oct 12 02:46 /dev/asm-diskq
brw-rw---- 1 grid asmadmin 8, 192 Oct 12 02:46 /dev/asm-disku
brw-rw---- 1 grid asmadmin 8, 80 Oct 12 02:46 /dev/asm-diskn
brw-rw---- 1 grid asmadmin 8, 128 Oct 12 02:46 /dev/asm-diskp
brw-rw---- 1 grid asmadmin 8, 96 Oct 12 02:46 /dev/asm-diskr
brw-rw---- 1 grid asmadmin 8, 176 Oct 12 02:46 /dev/asm-diskt
brw-rw---- 1 grid asmadmin 8, 208 Oct 12 02:46 /dev/asm-diskv
2.2. Check all nodes in the node list before installing the cluster:
[grid@edison1 ~]$ cluvfy stage -pre crsinst -n edison2
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "edison1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet "192.168.10.0" with node(s) edison2
TCP connectivity check passed for subnet "192.168.10.0"
Node connectivity passed for subnet "172.168.10.0" with node(s) edison2
TCP connectivity check passed for subnet "172.168.10.0"
Interfaces found on subnet "172.168.10.0" that are likely candidates for VIP are:
edison2 eth1:172.168.10.192
Interfaces found on subnet "192.168.10.0" that are likely candidates for a private interconnect are:
edison2 eth0:192.168.10.183
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "172.168.10.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "172.168.10.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "edison2:/g01/grid/app/11.2.0/grid,edison2:/tmp"
Check for multiple users with UID value 500 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" failed
Check failed on nodes:
edison2
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "ksh"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: edison2
File "/etc/resolv.conf" is not consistent across nodes
Time zone consistency check passed
Pre-check for cluster services setup was unsuccessful on all the nodes.
[grid@edison1 ~]$
The pre-check fails because DNS is not configured; this can be ignored for now.
2.3. Run the addNode.sh command as the GI user:
Before actually adding the node, addNode.sh itself calls the cluvfy tool to verify that the new node meets the prerequisites; if it does not, the script refuses to proceed.
The addNode.sh script is as follows:
[grid@edison1 ~]$ cd $ORACLE_HOME/oui/bin
[grid@edison1 bin]$ ls
addLangs.sh attachHome.sh filesList.bat filesList.sh resource runInstaller runSSHSetup.sh
addNode.sh detachHome.sh filesList.properties lsnodes runConfig.sh runInstaller.sh
[grid@edison1 bin]$ cat addNode.sh
#!/bin/sh
OHOME=/g01/grid/app/11.2.0/grid
INVPTRLOC=$OHOME/oraInst.loc
EXIT_CODE=0
ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC ORACLE_HOME=$OHOME $*"
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f "$OHOME/cv/cvutl/check_nodeadd.pl" ]
then
$ADDNODE
EXIT_CODE=$?;
else
CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre ORACLE_HOME=$OHOME $*"
$CHECK_NODEADD
EXIT_CODE=$?;
if [ $EXIT_CODE -eq 0 ]
then
$ADDNODE
EXIT_CODE=$?;
fi
fi
exit $EXIT_CODE ;
[grid@edison1 bin]$
--直接运行addNode.sh脚本会提示我们指定参数,如下:
[grid@edison1 bin]$ ./addNode.sh
ERROR:
Value for CLUSTER_NEW_NODES not specified.
USAGE:
/g01/grid/app/11.2.0/grid/cv/cvutl/check_nodeadd.pl {-pre|-post} <options>
/g01/grid/app/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre CLUSTER_NEW_NODES={<comma-separated-node-list>}
/g01/grid/app/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre CLUSTER_NEW_NODES={<comma-separated-node-list>} CLUSTER_NEW_VIRTUAL_HOSTNAMES={<comma-separated-node-list>}
/g01/grid/app/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre -responseFile <response-file-name>
/g01/grid/app/11.2.0/grid/cv/cvutl/check_nodeadd.pl -post
[grid@edison1 bin]$
Because addNode.sh calls cluvfy to verify the new node, and our DNS is not configured, running it directly is bound to fail.
So before running addNode.sh, set an environment variable that skips the node-addition pre-checks. This variable comes straight from the addNode.sh script itself:
export IGNORE_PREADDNODE_CHECKS=Y
We use a silent installation here; run the following as the grid user on node 1:
[grid@edison1 bin]$ export IGNORE_PREADDNODE_CHECKS=Y
[grid@edison1 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={edison2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={edison2-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={edison2-priv}" > /home/grid/addNode.log 2>&1
[grid@edison1 bin]$
The command appears to hang while it runs; use tail -10f /home/grid/addNode.log to watch the log.
The add-node log will prompt you to run some scripts:
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/g01/grid/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'edison2'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/g01/grid/app/oraInventory/orainstRoot.sh #On nodes edison2
/g01/grid/app/11.2.0/grid/root.sh #On nodes edison2
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /g01/grid/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
On node 2, run the following scripts in order as root:
/g01/grid/app/oraInventory/orainstRoot.sh
/g01/grid/app/11.2.0/grid/root.sh
[root@edison2 ~]# /g01/grid/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /g01/grid/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /g01/grid/app/oraInventory to oinstall.
The execution of the script is complete.
[root@edison2 ~]# /g01/grid/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /g01/grid/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /g01/grid/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node edison1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@edison2 ~]#
The full add-node log is as follows:
[root@edison1 grid]# tail -10f addNode.log
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 5736 MB Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
Performing tests to see whether nodes edison2 are available
............................................................... 100% Done.
.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /g01/grid/app/11.2.0/grid
New Nodes
Space Requirements
New Nodes
edison2
/: Required 4.23GB : Available 41.57GB
Installed Products
Product Names
Oracle Grid Infrastructure 11.2.0.3.0
Sun JDK 1.5.0.30.03
Installer SDK Component 11.2.0.3.0
Oracle One-Off Patch Installer 11.2.0.1.7
Oracle Universal Installer 11.2.0.3.0
Oracle USM Deconfiguration 11.2.0.3.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Enterprise Manager Common Core Files 10.2.0.4.4
Oracle DBCA Deconfiguration 11.2.0.3.0
Oracle RAC Deconfiguration 11.2.0.3.0
Oracle Quality of Service Management (Server) 11.2.0.3.0
Installation Plugin Files 11.2.0.3.0
Universal Storage Manager Files 11.2.0.3.0
Oracle Text Required Support Files 11.2.0.3.0
Automatic Storage Management Assistant 11.2.0.3.0
Oracle Database 11g Multimedia Files 11.2.0.3.0
Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
Oracle Globalization Support 11.2.0.3.0
Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
Oracle Core Required Support Files 11.2.0.3.0
Bali Share 1.1.18.0.0
Oracle Database Deconfiguration 11.2.0.3.0
Oracle Quality of Service Management (Client) 11.2.0.3.0
Expat libraries 2.0.1.0.1
Oracle Containers for Java 11.2.0.3.0
Perl Modules 5.10.0.0.1
Secure Socket Layer 11.2.0.3.0
Oracle JDBC/OCI Instant Client 11.2.0.3.0
Oracle Multimedia Client Option 11.2.0.3.0
LDAP Required Support Files 11.2.0.3.0
Character Set Migration Utility 11.2.0.3.0
Perl Interpreter 5.10.0.0.2
PL/SQL Embedded Gateway 11.2.0.3.0
OLAP SQL Scripts 11.2.0.3.0
Database SQL Scripts 11.2.0.3.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
SSL Required Support Files for InstantClient 11.2.0.3.0
SQL*Plus Files for Instant Client 11.2.0.3.0
Oracle Net Required Support Files 11.2.0.3.0
Oracle Database User Interface 2.2.13.0.0
RDBMS Required Support Files for Instant Client 11.2.0.3.0
RDBMS Required Support Files Runtime 11.2.0.3.0
XML Parser for Java 11.2.0.3.0
Oracle Security Developer Tools 11.2.0.3.0
Oracle Wallet Manager 11.2.0.3.0
Enterprise Manager plugin Common Files 11.2.0.3.0
Platform Required Support Files 11.2.0.3.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
RDBMS Required Support Files 11.2.0.3.0
Oracle Ice Browser 5.2.3.6.0
Oracle Help For Java 4.2.9.0.0
Enterprise Manager Common Files 10.2.0.4.3
Deinstallation Tool 11.2.0.3.0
Oracle Java Client 11.2.0.3.0
Cluster Verification Utility Files 11.2.0.3.0
Oracle Notification Service (eONS) 11.2.0.3.0
Oracle LDAP administration 11.2.0.3.0
Cluster Verification Utility Common Files 11.2.0.3.0
Oracle Clusterware RDBMS Files 11.2.0.3.0
Oracle Locale Builder 11.2.0.3.0
Oracle Globalization Support 11.2.0.3.0
Buildtools Common Files 11.2.0.3.0
Oracle RAC Required Support Files-HAS 11.2.0.3.0
SQL*Plus Required Support Files 11.2.0.3.0
XDK Required Support Files 11.2.0.3.0
Agent Required Support Files 10.2.0.4.3
Parser Generator Required Support Files 11.2.0.3.0
Precompiler Required Support Files 11.2.0.3.0
Installation Common Files 11.2.0.3.0
Required Support Files 11.2.0.3.0
Oracle JDBC/THIN Interfaces 11.2.0.3.0
Oracle Multimedia Locator 11.2.0.3.0
Oracle Multimedia 11.2.0.3.0
HAS Common Files 11.2.0.3.0
Assistant Common Files 11.2.0.3.0
PL/SQL 11.2.0.3.0
HAS Files for DB 11.2.0.3.0
Oracle Recovery Manager 11.2.0.3.0
Oracle Database Utilities 11.2.0.3.0
Oracle Notification Service 11.2.0.3.0
SQL*Plus 11.2.0.3.0
Oracle Netca Client 11.2.0.3.0
Oracle Net 11.2.0.3.0
Oracle JVM 11.2.0.3.0
Oracle Internet Directory Client 11.2.0.3.0
Oracle Net Listener 11.2.0.3.0
Cluster Ready Services Files 11.2.0.3.0
Oracle Database 11g 11.2.0.3.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Sunday, October 12, 2014 5:30:19 AM EDT)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Sunday, October 12, 2014 5:30:24 AM EDT)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Sunday, October 12, 2014 5:41:48 AM EDT)
. 100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/g01/grid/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'edison2'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/g01/grid/app/oraInventory/orainstRoot.sh #On nodes edison2
/g01/grid/app/11.2.0/grid/root.sh #On nodes edison2
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /g01/grid/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
^C
[root@edison1 grid]#
This log is also written to /tmp/silentInstall.log.
2.4. Check the CRS status:
[grid@edison1 ~]$ ./crs_stat
Name Target State Host
------------------------------ ---------- --------- -------
ora.BACKUPDG.dg ONLINE ONLINE edison1
ora.DATA.dg ONLINE ONLINE edison1
ora.LISTENER.lsnr ONLINE ONLINE edison1
ora.LISTENER_SCAN1.lsnr ONLINE ONLINE edison1
ora.SYSTEMDG.dg ONLINE ONLINE edison1
ora.asm ONLINE ONLINE edison1
ora.cvu OFFLINE OFFLINE
ora.gsd OFFLINE OFFLINE
ora.edison1.ASM1.asm ONLINE ONLINE edison1
ora.edison1.LISTENER_edison1.lsnr ONLINE ONLINE edison1
ora.edison1.gsd OFFLINE OFFLINE
ora.edison1.ons ONLINE ONLINE edison1
ora.edison1.vip ONLINE ONLINE edison1
ora.edison2.ASM2.asm ONLINE ONLINE edison2
ora.edison2.LISTENER_edison2.lsnr ONLINE ONLINE edison2
ora.edison2.gsd OFFLINE OFFLINE
ora.edison2.ons ONLINE ONLINE edison2
ora.edison2.vip ONLINE ONLINE edison2
ora.net1.network ONLINE ONLINE edison1
ora.oc4j OFFLINE OFFLINE
ora.ons ONLINE ONLINE edison1
ora.prod.db ONLINE ONLINE edison1
ora.prod.edison_taf.svc ONLINE ONLINE edison1
ora.scan1.vip ONLINE ONLINE edison1
As you can see, some of the resources are now running on node 2.
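At this point you can also run the standard cluvfy post-addition check (optional; its output is not shown in the original):
cluvfy stage -post nodeadd -n edison2 -verbose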
3. Install the Oracle software on the new node:
3.1. Run the addNode.sh command:
Just as with the grid user, run the $ORACLE_HOME/oui/bin/addNode.sh script as the oracle user on node 1 to install the Oracle software. Before installing, we again do some verification.
-- View the addNode.sh script:
[oracle@edison1 bin]$ pwd
/s01/oracle/app/oracle/product/11.2.0/dbhome_1/oui/bin
[oracle@edison1 bin]$ cat addNode.sh
#!/bin/sh
OHOME=/s01/oracle/app/oracle/product/11.2.0/dbhome_1
INVPTRLOC=$OHOME/oraInst.loc
EXIT_CODE=0
ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC ORACLE_HOME=$OHOME $*"
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f "$OHOME/cv/cvutl/check_nodeadd.pl" ]
then
$ADDNODE
EXIT_CODE=$?;
else
CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre ORACLE_HOME=$OHOME $*"
$CHECK_NODEADD
EXIT_CODE=$?;
if [ $EXIT_CODE -eq 0 ]
then
$ADDNODE
EXIT_CODE=$?;
fi
fi
exit $EXIT_CODE ;
[oracle@edison1 bin]$
Before actually adding the node, addNode.sh again calls the cluvfy tool to verify that the new node meets the prerequisites; if it does not, the script refuses to proceed.
So before running addNode.sh, set the environment variable that skips the node-addition pre-checks; as before, this variable comes from the addNode.sh script itself:
export IGNORE_PREADDNODE_CHECKS=Y
Using a silent installation, run the following as the oracle user on node 1:
[oracle@edison1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@edison1 bin]$ ls
addLangs.sh attachHome.sh filesList.bat filesList.sh resource runInstaller runSSHSetup.sh
addNode.sh detachHome.sh filesList.properties lsnodes runConfig.sh runInstaller.sh
[oracle@edison1 bin]$ export IGNORE_PREADDNODE_CHECKS=Y
[oracle@edison1 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={edison2}">/home/oracle/addNode.log 2>&1
Then wait for the installation to complete; you can watch the log in the meantime.
[root@edison1 oracle]# tail -10f addNode.log
Enterprise Edition Options 11.2.0.3.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Sunday, October 12, 2014 6:12:21 AM EDT)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Sunday, October 12, 2014 6:12:26 AM EDT)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Sunday, October 12, 2014 6:29:38 AM EDT)
. 100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/s01/oracle/app/oracle/product/11.2.0/dbhome_1/root.sh #On nodes edison2
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /s01/oracle/app/oracle/product/11.2.0/dbhome_1 was successful.
Please check '/tmp/silentInstall.log' for more details.
^C
[root@edison1 oracle]#
When it finishes, you are prompted to run the following script as root on node 2:
/s01/oracle/app/oracle/product/11.2.0/dbhome_1/root.sh
[root@edison2 ~]# /s01/oracle/app/oracle/product/11.2.0/dbhome_1/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /s01/oracle/app/oracle/product/11.2.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@edison2 ~]#
The full log from the Oracle software installation is as follows:
[root@edison1 oracle]# cat addNode.log
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 5444 MB Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
Performing tests to see whether nodes edison2 are available
............................................................... 100% Done.
..
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /s01/oracle/app/oracle/product/11.2.0/dbhome_1
New Nodes
Space Requirements
New Nodes
edison2
/: Required 4.28GB : Available 38.40GB
Installed Products
Product Names
Oracle Database 11g 11.2.0.3.0
Sun JDK 1.5.0.30.03
Installer SDK Component 11.2.0.3.0
Oracle One-Off Patch Installer 11.2.0.1.7
Oracle Universal Installer 11.2.0.3.0
Oracle USM Deconfiguration 11.2.0.3.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Oracle DBCA Deconfiguration 11.2.0.3.0
Oracle RAC Deconfiguration 11.2.0.3.0
Oracle Database Deconfiguration 11.2.0.3.0
Oracle Configuration Manager Client 10.3.2.1.0
Oracle Configuration Manager 10.3.5.0.1
Oracle ODBC Driverfor Instant Client 11.2.0.3.0
LDAP Required Support Files 11.2.0.3.0
SSL Required Support Files for InstantClient 11.2.0.3.0
Bali Share 1.1.18.0.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
Oracle Real Application Testing 11.2.0.3.0
Oracle Database Vault J2EE Application 11.2.0.3.0
Oracle Label Security 11.2.0.3.0
Oracle Data Mining RDBMS Files 11.2.0.3.0
Oracle OLAP RDBMS Files 11.2.0.3.0
Oracle OLAP API 11.2.0.3.0
Platform Required Support Files 11.2.0.3.0
Oracle Database Vault option 11.2.0.3.0
Oracle RAC Required Support Files-HAS 11.2.0.3.0
SQL*Plus Required Support Files 11.2.0.3.0
Oracle Display Fonts 9.0.2.0.0
Oracle Ice Browser 5.2.3.6.0
Oracle JDBC Server Support Package 11.2.0.3.0
Oracle SQL Developer 11.2.0.3.0
Oracle Application Express 11.2.0.3.0
XDK Required Support Files 11.2.0.3.0
RDBMS Required Support Files for Instant Client 11.2.0.3.0
SQLJ Runtime 11.2.0.3.0
Database Workspace Manager 11.2.0.3.0
RDBMS Required Support Files Runtime 11.2.0.3.0
Oracle Globalization Support 11.2.0.3.0
Exadata Storage Server 11.2.0.1.0
Provisioning Advisor Framework 10.2.0.4.3
Enterprise Manager Database Plugin -- Repository Support 11.2.0.3.0
Enterprise Manager Repository Core Files 10.2.0.4.4
Enterprise Manager Database Plugin -- Agent Support 11.2.0.3.0
Enterprise Manager Grid Control Core Files 10.2.0.4.4
Enterprise Manager Common Core Files 10.2.0.4.4
Enterprise Manager Agent Core Files 10.2.0.4.4
RDBMS Required Support Files 11.2.0.3.0
regexp 2.1.9.0.0
Agent Required Support Files 10.2.0.4.3
Oracle 11g Warehouse Builder Required Files 11.2.0.3.0
Oracle Notification Service (eONS) 11.2.0.3.0
Oracle Text Required Support Files 11.2.0.3.0
Parser Generator Required Support Files 11.2.0.3.0
Oracle Database 11g Multimedia Files 11.2.0.3.0
Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
Oracle Multimedia Annotator 11.2.0.3.0
Oracle JDBC/OCI Instant Client 11.2.0.3.0
Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
Precompiler Required Support Files 11.2.0.3.0
Oracle Core Required Support Files 11.2.0.3.0
Sample Schema Data 11.2.0.3.0
Oracle Starter Database 11.2.0.3.0
Oracle Message Gateway Common Files 11.2.0.3.0
Oracle XML Query 11.2.0.3.0
XML Parser for Oracle JVM 11.2.0.3.0
Oracle Help For Java 4.2.9.0.0
Installation Plugin Files 11.2.0.3.0
Enterprise Manager Common Files 10.2.0.4.3
Expat libraries 2.0.1.0.1
Deinstallation Tool 11.2.0.3.0
Oracle Quality of Service Management (Client) 11.2.0.3.0
Perl Modules 5.10.0.0.1
JAccelerator (COMPANION) 11.2.0.3.0
Oracle Containers for Java 11.2.0.3.0
Perl Interpreter 5.10.0.0.2
Oracle Net Required Support Files 11.2.0.3.0
Secure Socket Layer 11.2.0.3.0
Oracle Universal Connection Pool 11.2.0.3.0
Oracle JDBC/THIN Interfaces 11.2.0.3.0
Oracle Multimedia Client Option 11.2.0.3.0
Oracle Java Client 11.2.0.3.0
Character Set Migration Utility 11.2.0.3.0
Oracle Code Editor 1.2.1.0.0I
PL/SQL Embedded Gateway 11.2.0.3.0
OLAP SQL Scripts 11.2.0.3.0
Database SQL Scripts 11.2.0.3.0
Oracle Locale Builder 11.2.0.3.0
Oracle Globalization Support 11.2.0.3.0
SQL*Plus Files for Instant Client 11.2.0.3.0
Required Support Files 11.2.0.3.0
Oracle Database User Interface 2.2.13.0.0
Oracle ODBC Driver 11.2.0.3.0
Oracle Notification Service 11.2.0.3.0
XML Parser for Java 11.2.0.3.0
Oracle Security Developer Tools 11.2.0.3.0
Oracle Wallet Manager 11.2.0.3.0
Cluster Verification Utility Common Files 11.2.0.3.0
Oracle Clusterware RDBMS Files 11.2.0.3.0
Oracle UIX 2.2.24.6.0
Enterprise Manager plugin Common Files 11.2.0.3.0
HAS Common Files 11.2.0.3.0
Precompiler Common Files 11.2.0.3.0
Installation Common Files 11.2.0.3.0
Oracle Help for the Web 2.0.14.0.0
Oracle LDAP administration 11.2.0.3.0
Buildtools Common Files 11.2.0.3.0
Assistant Common Files 11.2.0.3.0
Oracle Recovery Manager 11.2.0.3.0
PL/SQL 11.2.0.3.0
Generic Connectivity Common Files 11.2.0.3.0
Oracle Database Gateway for ODBC 11.2.0.3.0
Oracle Programmer 11.2.0.3.0
Oracle Database Utilities 11.2.0.3.0
Enterprise Manager Agent 10.2.0.4.3
SQL*Plus 11.2.0.3.0
Oracle Netca Client 11.2.0.3.0
Oracle Multimedia Locator 11.2.0.3.0
Oracle Call Interface (OCI) 11.2.0.3.0
Oracle Multimedia 11.2.0.3.0
Oracle Net 11.2.0.3.0
Oracle XML Development Kit 11.2.0.3.0
Database Configuration and Upgrade Assistants 11.2.0.3.0
Oracle JVM 11.2.0.3.0
Oracle Advanced Security 11.2.0.3.0
Oracle Internet Directory Client 11.2.0.3.0
Oracle Enterprise Manager Console DB 11.2.0.3.0
HAS Files for DB 11.2.0.3.0
Oracle Net Listener 11.2.0.3.0
Oracle Text 11.2.0.3.0
Oracle Net Services 11.2.0.3.0
Oracle Database 11g 11.2.0.3.0
Oracle OLAP 11.2.0.3.0
Oracle Spatial 11.2.0.3.0
Oracle Partitioning 11.2.0.3.0
Enterprise Edition Options 11.2.0.3.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Sunday, October 12, 2014 6:12:21 AM EDT)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Sunday, October 12, 2014 6:12:26 AM EDT)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Sunday, October 12, 2014 6:29:38 AM EDT)
. 100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/s01/oracle/app/oracle/product/11.2.0/dbhome_1/root.sh #On nodes edison2
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /s01/oracle/app/oracle/product/11.2.0/dbhome_1 was successful.
Please check '/tmp/silentInstall.log' for more details.
[root@edison1 oracle]#
3.2. Check the Oracle processes:
[root@edison2 ~]# ps -ef|grep ora
root 3573 3528 0 02:46 ? 00:00:07 hald-addon-storage: polling /dev/sr0
grid 22568 1 0 05:48 ? 00:00:05 /g01/grid/app/11.2.0/grid/bin/oraagent.bin
root 22734 1 0 05:48 ? 00:00:07 /g01/grid/app/11.2.0/grid/bin/orarootagent.bin
grid 22994 1 0 05:49 ? 00:00:00 oracle+ASM2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 23032 1 0 05:49 ? 00:00:00 oracle+ASM2_ocr (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 23062 1 0 05:49 ? 00:00:00 oracle+ASM2_asmb_+asm2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 23183 1 0 05:50 ? 00:00:03 /g01/grid/app/11.2.0/grid/bin/oraagent.bin
root 23190 1 0 05:50 ? 00:00:07 /g01/grid/app/11.2.0/grid/bin/orarootagent.bin
grid 23230 1 0 05:50 ? 00:00:00 oracle+ASM2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 23239 1 0 05:50 ? 00:00:00 oracle+ASM2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid 23249 1 0 05:50 ? 00:00:00 oracle+ASM2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
root 25113 24897 0 06:43 pts/1 00:00:00 grep ora
[root@edison2 ~]#
4. Use DBCA to add a new instance on the new node:
4.1. Run the dbca command:
On node 1, use dbca to add the Oracle instance to the database:
dbca -> RAC database -> Instance Management -> Add an instance -> select the database and enter the sys user and password -> select the node and instance name -> Finish.
You can also run dbca silently:
[oracle@edison1 ~]$ dbca -silent -addInstance -nodeList edison2 -gdbName prod -instanceName PROD2 -sysDBAUserName sys -sysDBAPassword oracle
Adding instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
66% complete
Completing instance management.
76% complete
100% complete
Look at the log file "/s01/oracle/app/oracle/cfgtoollogs/dbca/PROD/PROD.log" for further details.
[oracle@edison1 ~]$
4.2. Add the instance to the CRS resources as the oracle user:
Note: this must be run as the oracle user; also pay attention to the oracle user's group membership.
[oracle@edison1 ~]$ srvctl config database -d prod
Database unique name: PROD
Database name: PROD
Oracle home: /s01/oracle/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/PROD/spfilePROD.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: PROD
Database instances: PROD1,PROD2
Disk Groups: DATA,BACKUPDG
Mount point paths:
Services: edison_taf
Type: RAC
Database is administrator managed
[oracle@edison1 ~]$
You can see that instance PROD2 is already associated with the database, so no further step is needed. If it were not associated, you would run the following command:
srvctl add instance -d prod -i PROD2 -n edison2
5. Modify the client-side TAF configuration:
5.1. Modify the tnsnames.ora file under the oracle user:
If the entries are configured with the SCAN name, no change is needed; if they are configured with VIPs, the new node's VIP must be added as well.
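A sketch of what a VIP-based entry might look like with the new node added (the alias name PROD and the entry body are assumptions; the failover settings mirror the edison_taf service configuration shown in section 6):
PROD =
  (DESCRIPTION =
    (LOAD_BALANCE = yes)
    (ADDRESS = (PROTOCOL = TCP)(HOST = edison1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = edison2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = edison_taf)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))
    )
  )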
5.2. Modify the LOCAL_LISTENER and REMOTE_LISTENER parameters:
Run the following:
alter system set LOCAL_LISTENER='NODE1_LOCAL' scope=both sid='PROD1';
alter system set LOCAL_LISTENER='NODE2_LOCAL' scope=both sid='PROD2';
alter system set REMOTE_LISTENER='edison_REMOTE' scope=both sid='*';
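NODE1_LOCAL, NODE2_LOCAL, and edison_REMOTE are TNS aliases that must resolve from each instance's tnsnames.ora; the original does not show their contents, so the following is a hypothetical sketch. In 11.2, REMOTE_LISTENER is also often set directly to the SCAN address, e.g. 'edison-cluster-scan:1521'.
NODE1_LOCAL = (ADDRESS = (PROTOCOL = TCP)(HOST = edison1-vip)(PORT = 1521))
NODE2_LOCAL = (ADDRESS = (PROTOCOL = TCP)(HOST = edison2-vip)(PORT = 1521))
edison_REMOTE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = edison1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = edison2-vip)(PORT = 1521))
    )
  )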
If the listener configuration is already correct, no change is needed.
6. Modify the server-side TAF configuration:
[root@edison1 ~]# srvctl status service -d prod
Service edison_taf is running on instance(s) PROD1
[root@edison1 ~]#
[root@edison1 ~]# srvctl config service -d prod
Service name: edison_taf
Service is enabled
Server pool: prod_edison_taf
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 180
TAF failover delay: 5
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: PROD1
Available instances:
[root@edison1 ~]#
-- Modify the existing service to add the node 2 instance PROD2:
[root@edison1 ~]# srvctl modify service -d prod -s edison_taf -n -i PROD2,PROD1
-- Verify:
[root@edison1 ~]# srvctl config service -d prod
Service name: edison_taf
Service is enabled
Server pool: prod_edison_taf
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 180
TAF failover delay: 5
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: PROD2,PROD1
Available instances:
[root@edison1 ~]#
[root@edison1 ~]# srvctl start service -d prod -s edison_taf -i PROD2
[root@edison1 ~]# srvctl status service -d prod
Service edison_taf is running on instance(s) PROD1,PROD2
[root@edison1 ~]#
7. Verification:
[root@edison1 ~]# olsnodes -s
edison1 Active
edison2 Active
[root@edison1 ~]# olsnodes -n
edison1 1
edison2 2
[root@edison1 ~]#
[root@edison1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.BACKUPDG.dg
ONLINE ONLINE edison1
ONLINE ONLINE edison2
ora.DATA.dg
ONLINE ONLINE edison1
ONLINE ONLINE edison2
ora.LISTENER.lsnr
ONLINE ONLINE edison1
ONLINE ONLINE edison2
ora.SYSTEMDG.dg
ONLINE ONLINE edison1
ONLINE ONLINE edison2
ora.asm
ONLINE ONLINE edison1 Started
ONLINE ONLINE edison2 Started
ora.gsd
OFFLINE OFFLINE edison1
OFFLINE OFFLINE edison2
ora.net1.network
ONLINE ONLINE edison1
ONLINE ONLINE edison2
ora.ons
ONLINE ONLINE edison1
ONLINE ONLINE edison2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE edison1
ora.cvu
1 OFFLINE OFFLINE
ora.edison1.vip
1 ONLINE ONLINE edison1
ora.edison2.vip
1 ONLINE ONLINE edison2
ora.oc4j
1 OFFLINE OFFLINE
ora.prod.db
1 ONLINE ONLINE edison1 Open
2 ONLINE ONLINE edison2 Open
ora.prod.edison_taf.svc
1 ONLINE ONLINE edison1
2 ONLINE ONLINE edison2
ora.scan1.vip
1 ONLINE ONLINE edison1
[root@edison1 ~]#
SQL> col host_name for a20
SQL> select inst_id,host_name,instance_name,status from gv$instance;
INST_ID HOST_NAME INSTANCE_NAME STATUS
---------- -------------------- ---------------- ------------
1 edison1.oracle.com PROD1 OPEN
2 edison2.oracle.com PROD2 OPEN
SQL>
[grid@edison1 ~]$ ./crs_stat -t
Name Target State Host
------------------------------ ---------- --------- -------
ora.BACKUPDG.dg ONLINE ONLINE edison1
ora.DATA.dg ONLINE ONLINE edison1
ora.LISTENER.lsnr ONLINE ONLINE edison1
ora.LISTENER_SCAN1.lsnr ONLINE ONLINE edison1
ora.SYSTEMDG.dg ONLINE ONLINE edison1
ora.asm ONLINE ONLINE edison1
ora.cvu OFFLINE OFFLINE
ora.gsd OFFLINE OFFLINE
ora.edison1.ASM1.asm ONLINE ONLINE edison1
ora.edison1.LISTENER_edison1.lsnr ONLINE ONLINE edison1
ora.edison1.gsd OFFLINE OFFLINE
ora.edison1.ons ONLINE ONLINE edison1
ora.edison1.vip ONLINE ONLINE edison1
ora.edison2.ASM2.asm ONLINE ONLINE edison2
ora.edison2.LISTENER_edison2.lsnr ONLINE ONLINE edison2
ora.edison2.gsd OFFLINE OFFLINE
ora.edison2.ons ONLINE ONLINE edison2
ora.edison2.vip ONLINE ONLINE edison2
ora.net1.network ONLINE ONLINE edison1
ora.oc4j OFFLINE OFFLINE
ora.ons ONLINE ONLINE edison1
ora.prod.db ONLINE ONLINE edison1
ora.prod.edison_taf.svc ONLINE ONLINE edison1
ora.scan1.vip ONLINE ONLINE edison1
[grid@edison1 ~]$
8. Summary: see below.