Oracle Database 11g RAC: Removing a Node Under Normal Conditions
In day-to-day RAC operations, node removal usually comes up in one of two scenarios:
1. A planned, foreseeable server replacement.
2. An unplanned node failure, such as a software fault or data loss on one node, where the node has to be evicted from the cluster so that it can be redeployed or handled by some other strategy.
This article walks through removing a RAC node under normal (planned) conditions.
The walkthrough is based on an existing three-node Oracle Database 11g RAC environment.
The current state of that environment is shown below:
[grid@rac2 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.CRS.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    rac1
ora.DATA01.dg  ora....up.type 0/5    0/     ONLINE    ONLINE    rac1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    rac1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    rac2
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    rac1
ora.ceph.db    ora....se.type 0/2    0/1    ONLINE    ONLINE    rac1
ora.cvu        ora.cvu.type   0/5    0/0    ONLINE    ONLINE    rac1
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type 1/5    0/     ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  0/1    0/2    ONLINE    ONLINE    rac1
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    rac1
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type 0/0    1/0    ONLINE    ONLINE    rac1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2
ora.rac2.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    rac2
ora....SM3.asm application    0/5    0/0    ONLINE    ONLINE    rac3
ora....C3.lsnr application    0/5    0/0    ONLINE    ONLINE    rac3
ora.rac3.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.rac3.ons   application    0/3    0/0    ONLINE    ONLINE    rac3
ora.rac3.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    rac3
ora....ry.acfs ora....fs.type 0/5    0/     ONLINE    ONLINE    rac1
ora.scan1.vip  ora....ip.type 0/0    1/0    ONLINE    ONLINE    rac2
[grid@rac2 ~]$
As shown above, the RAC in my environment has three nodes.
In this article I will remove node 3 (rac3) from the RAC.
————————————————————————————————-
1. Unpin node 3.
Run this step as root on any one of the nodes that will remain in the cluster.
Node 2 is used here.
[grid@rac2 ~]$ hostname rac2 [grid@rac2 ~]$ /sbin/ifconfig eth0 Link encap:Ethernet HWaddr 00:0C:29:38:39:78 inet addr:192.168.119.142 Bcast:192.168.119.255 Mask:255.255.255.0 inet6 addr: fe80::20c:29ff:fe38:3978/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:8664399 errors:0 dropped:0 overruns:0 frame:0 TX packets:3550254 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:11971593849 (11.1 GiB) TX bytes:1099001416 (1.0 GiB) eth0:1 Link encap:Ethernet HWaddr 00:0C:29:38:39:78 inet addr:192.168.119.160 Bcast:192.168.119.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 eth0:3 Link encap:Ethernet HWaddr 00:0C:29:38:39:78 inet addr:192.168.119.152 Bcast:192.168.119.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 eth1 Link encap:Ethernet HWaddr 00:0C:29:38:39:82 inet addr:192.168.56.142 Bcast:192.168.56.255 Mask:255.255.255.0 inet6 addr: fe80::20c:29ff:fe38:3982/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1274420 errors:0 dropped:0 overruns:0 frame:0 TX packets:1471669 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:714123515 (681.0 MiB) TX bytes:971987607 (926.9 MiB) eth1:1 Link encap:Ethernet HWaddr 00:0C:29:38:39:82 inet addr:169.254.241.68 Bcast:169.254.255.255 Mask:255.255.0.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:255659 errors:0 dropped:0 overruns:0 frame:0 TX packets:255659 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:117983558 (112.5 MiB) TX bytes:117983558 (112.5 MiB) [grid@rac2 ~]$ [grid@rac2 ~]$ whoami grid [grid@rac2 ~]$ |
Check the node status:
[root@rac2 ~]# /u01/app/11.2.0/grid/bin/olsnodes -t -s
rac1    Active  Unpinned
rac2    Active  Unpinned
rac3    Active  Unpinned
[root@rac2 ~]#
If node 3 (rac3) is not in the "Unpinned" state, change it.
The command is: /u01/app/11.2.0/grid/bin/crsctl unpin css -n rac3
For example:
[root@rac2 ~]# /u01/app/11.2.0/grid/bin/crsctl unpin css -n rac3
CRS-4667: Node rac3 successfully unpinned.
[root@rac2 ~]#
2. Delete the database instance running on node 3
In this environment, the database instance running on node 3 is ceph3.
[root@rac3 ~]# ps -ef | grep pmon
grid      1916     1  0 09:50 ?        00:00:00 asm_pmon_+ASM3
root     10742  6996  0 18:10 pts/1    00:00:00 grep pmon
oracle   14177     1  0 10:20 ?        00:00:00 ora_pmon_ceph3
[root@rac3 ~]#

[root@rac1 software]# su - oracle
[oracle@rac1 ~]$ export ORACLE_SID=ceph1
[oracle@rac1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Tue Sep 2 18:15:07 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL>
SQL> col instance for a10
SQL> select thread#,status,instance from v$thread;

   THREAD# STATUS INSTANCE
---------- ------ ----------
         1 OPEN   ceph1
         2 OPEN   ceph2
         3 OPEN   ceph3

SQL>
The instance can be deleted from any one of the remaining nodes (node 2 is used here).
Log in as the oracle OS user and run DBCA.
Option 1: the DBCA graphical interface.
[root@rac2 ~]# su - oracle
[oracle@rac2 ~]$ export DISPLAY=192.168.119.1:0.0
[oracle@rac2 ~]$ xhost +
access control disabled, clients can connect from any host
[oracle@rac2 ~]$ dbca &
[1] 2328
[oracle@rac2 ~]$
In the DBCA wizard, select the cluster database (service) to work on, enter the sys user and its password, and click Next.
Then select the instance to be deleted and click Next; the deletion starts.
Option 2: DBCA on the command line (silent mode)
[oracle@rac2 ~]$ dbca -silent -deleteInstance -nodeList rac3 -gdbName ceph -instanceName ceph3 -sysDBAUserName sys -sysDBAPassword "oracle"
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/ceph.log" for further details.
[oracle@rac2 ~]$
After the deletion succeeds, confirm the database instance status:
SQL*Plus:
SQL> run
  1* select thread#,status,instance from v$thread

   THREAD# STATUS INSTANCE
---------- ------ ----------
         1 OPEN   ceph1
         2 OPEN   ceph2

SQL>
Srvctl:
[grid@rac1 ~]$ srvctl config database -d ceph
Database unique name: ceph
Database name: ceph
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA01/ceph/spfileceph.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ceph
Database instances: ceph1,ceph2
Disk Groups: DATA01
Mount point paths:
Services:
Type: RAC
Database is administrator managed
[grid@rac1 ~]$
Note the "Database instances" line.
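The configuration above only lists what is registered; to check which instances are actually running, a quick sketch (assuming the same database unique name, ceph) is:

[grid@rac1 ~]$ srvctl status database -d ceph

After the instance deletion it should report only ceph1 and ceph2, along the lines of "Instance ceph1 is running on node rac1" and "Instance ceph2 is running on node rac2".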
3. Stop the listener on node 3
(Run this on node 3.)
Current clusterware status:
[root@rac3 ~]# su - grid [grid@rac3 ~]$ crs_stat -v NAME=ora.CRS.dg TYPE=ora.diskgroup.type RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD= FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.DATA01.dg TYPE=ora.diskgroup.type RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD= FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.LISTENER.lsnr TYPE=ora.listener.type RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD= FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.LISTENER_SCAN1.lsnr TYPE=ora.scan_listener.type RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac2 NAME=ora.asm TYPE=ora.asm.type GEN_USR_ORA_INST_NAME@SERVERNAME(rac1)=+ASM1 GEN_USR_ORA_INST_NAME@SERVERNAME(rac2)=+ASM2 GEN_USR_ORA_INST_NAME@SERVERNAME(rac3)=+ASM3 RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD= FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.ceph.db TYPE=ora.database.type GEN_START_OPTIONS@SERVERNAME(rac1)=open GEN_START_OPTIONS@SERVERNAME(rac2)=open GEN_START_OPTIONS@SERVERNAME(rac3)=open GEN_USR_ORA_INST_NAME@SERVERNAME(rac1)=ceph1 GEN_USR_ORA_INST_NAME@SERVERNAME(rac2)=ceph2 RESTART_ATTEMPTS=2 RESTART_COUNT=0 USR_ORA_INST_NAME@SERVERNAME(rac1)=ceph1 USR_ORA_INST_NAME@SERVERNAME(rac2)=ceph2 FAILURE_THRESHOLD=1 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.cvu TYPE=ora.cvu.type RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac2 NAME=ora.gsd TYPE=ora.gsd.type RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD= FAILURE_COUNT=0 TARGET=OFFLINE STATE=OFFLINE NAME=ora.net1.network TYPE=ora.network.type RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD= FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.oc4j TYPE=ora.oc4j.type RESTART_ATTEMPTS=1 RESTART_COUNT=0 FAILURE_THRESHOLD=2 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac2 NAME=ora.ons TYPE=ora.ons.type RESTART_ATTEMPTS=3 RESTART_COUNT=0 FAILURE_THRESHOLD= FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.rac1.ASM1.asm TYPE=application GEN_USR_ORA_INST_NAME@SERVERNAME(rac1)=+ASM1 GEN_USR_ORA_INST_NAME@SERVERNAME(rac2)=+ASM2 GEN_USR_ORA_INST_NAME@SERVERNAME(rac3)=+ASM3 RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.rac1.LISTENER_RAC1.lsnr TYPE=application RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.rac1.gsd TYPE=application RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=OFFLINE STATE=OFFLINE NAME=ora.rac1.ons TYPE=application RESTART_ATTEMPTS=3 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.rac1.vip TYPE=ora.cluster_vip_net1.type RESTART_ATTEMPTS=0 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=1 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.rac2.ASM2.asm TYPE=application GEN_USR_ORA_INST_NAME@SERVERNAME(rac1)=+ASM1 GEN_USR_ORA_INST_NAME@SERVERNAME(rac2)=+ASM2 GEN_USR_ORA_INST_NAME@SERVERNAME(rac3)=+ASM3 RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac2 NAME=ora.rac2.LISTENER_RAC2.lsnr TYPE=application RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac2 NAME=ora.rac2.gsd TYPE=application RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=OFFLINE STATE=OFFLINE NAME=ora.rac2.ons TYPE=application RESTART_ATTEMPTS=3 
RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac2 NAME=ora.rac2.vip TYPE=ora.cluster_vip_net1.type RESTART_ATTEMPTS=0 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=1 TARGET=ONLINE STATE=ONLINE on rac2 NAME=ora.rac3.ASM3.asm TYPE=application GEN_USR_ORA_INST_NAME@SERVERNAME(rac1)=+ASM1 GEN_USR_ORA_INST_NAME@SERVERNAME(rac2)=+ASM2 GEN_USR_ORA_INST_NAME@SERVERNAME(rac3)=+ASM3 RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac3 NAME=ora.rac3.LISTENER_RAC3.lsnr TYPE=application RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac3 NAME=ora.rac3.gsd TYPE=application RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=OFFLINE STATE=OFFLINE NAME=ora.rac3.ons TYPE=application RESTART_ATTEMPTS=3 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac3 NAME=ora.rac3.vip TYPE=ora.cluster_vip_net1.type RESTART_ATTEMPTS=0 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac3 NAME=ora.registry.acfs TYPE=ora.registry.acfs.type RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD= FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.scan1.vip TYPE=ora.scan_vip.type RESTART_ATTEMPTS=0 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=1 TARGET=ONLINE STATE=ONLINE on rac2 [grid@rac3 ~]$ |
Listener configuration:
[grid@rac3 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
  /u01/app/11.2.0/grid on node(s) rac3,rac1,rac2
End points: TCP:1521
[grid@rac3 ~]$
Disable the listener:
[grid@rac3 ~]$ srvctl disable listener -l LISTENER -n rac3
[grid@rac3 ~]$
Stop the listener:
[grid@rac3 ~]$ srvctl stop listener -l LISTENER -n rac3
[grid@rac3 ~]$
[grid@rac3 ~]$ ps -ef | grep --color lsnr
grid     14517 13945  0 22:43 pts/1    00:00:00 grep --color lsnr
[grid@rac3 ~]$
[grid@rac3 ~]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 02-SEP-2014 22:41:17

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 2: No such file or directory
[grid@rac3 ~]$
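As a cross-check from the clusterware side, srvctl can also report the listener state; a hedged example (the exact wording of the output may vary slightly by version):

[grid@rac3 ~]$ srvctl status listener -l LISTENER -n rac3

It should show that LISTENER is disabled and not running on rac3.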
4. Update the cluster node list
Run this on node 3.
Execute it as the oracle OS user.
[root@rac3 ~]# su - oracle
[oracle@rac3 ~]$ cd $ORACLE_HOME/oui/bin/
[oracle@rac3 bin]$ ls -l | grep runInstaller
-rwxr-x--- 1 oracle oinstall 164048 Sep  2 00:28 runInstaller
-rwxr-x--- 1 oracle oinstall     79 Sep  2 00:28 runInstaller.sh
[oracle@rac3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac3}" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2015 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac3 bin]$
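If you want to see what -updateNodeList actually changed, the node list is recorded in the central inventory. A hedged check, assuming the default inventory location recorded in /etc/oraInst.loc:

[oracle@rac3 bin]$ grep -A 3 "dbhome_1" /u01/app/oraInventory/ContentsXML/inventory.xml

The <NODE_LIST> under the database home entry should now contain only rac3.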
5. Remove the database software from node 3
Run this on node 3.
[oracle@rac3 bin]$ cd /u01/app/oracle/product/11.2.0/dbhome_1/deinstall/ [oracle@rac3 deinstall]$ pwd /u01/app/oracle/product/11.2.0/dbhome_1/deinstall [oracle@rac3 deinstall]$ ls bootstrap.pl deinstall deinstall.pl deinstall.xml jlib readme.txt response sshUserSetup.sh [oracle@rac3 deinstall]$ ll total 92 -rwxr-xr-x 1 oracle oinstall 9780 Sep 2 00:27 bootstrap.pl -rwxr-xr-x 1 oracle oinstall 9662 Sep 2 00:27 deinstall -rwxr-xr-x 1 oracle oinstall 18841 Sep 2 00:27 deinstall.pl -rw-r--r-- 1 oracle oinstall 3466 Sep 2 00:27 deinstall.xml drwxr-xr-x 2 oracle oinstall 4096 Sep 2 00:27 jlib -rw-r--r-- 1 oracle oinstall 409 Sep 2 00:27 readme.txt drwxr-xr-x 2 oracle oinstall 4096 Sep 2 00:27 response -rwxr-xr-x 1 oracle oinstall 32343 Sep 2 00:27 sshUserSetup.sh [oracle@rac3 deinstall]$ ./deinstall -local Checking for required files and bootstrapping ... Please wait ... Location of logs /u01/app/oraInventory/logs/ ############ ORACLE DEINSTALL & DECONFIG TOOL START ############ ######################### CHECK OPERATION START ######################### ## [START] Install check configuration ## Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/dbhome_1 Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database Oracle Base selected for deinstall is: /u01/app/oracle Checking for existence of central inventory location /u01/app/oraInventory Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid The following nodes are part of this cluster: rac3 Checking for sufficient temp space availability on node(s) : 'rac3' ## [END] Install check configuration ## Network Configuration check config START Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2014-09-02_10-50-59-PM.log Network Configuration check config END Database Check Configuration START Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2014-09-02_10-51-03-PM.log Database Check Configuration END Enterprise Manager Configuration Assistant START EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2014-09-02_10-51-06-PM.log Enterprise Manager Configuration Assistant END Oracle Configuration Manager check START OCM check log file location : /u01/app/oraInventory/logs//ocm_check2193.log Oracle Configuration Manager check END ######################### CHECK OPERATION END ######################### ####################### CHECK OPERATION SUMMARY ####################### Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid The cluster node(s) on which the Oracle home deinstallation will be performed are:rac3 Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac3', and the global configuration will be removed. Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/dbhome_1 Inventory Location where the Oracle home registered is: /u01/app/oraInventory The option -local will not modify any database configuration for this Oracle home. No Enterprise Manager configuration to be updated for any database(s) No Enterprise Manager ASM targets to update No Enterprise Manager listener targets to migrate Checking the config status for CCR Oracle Home exists with CCR directory, but CCR is not configured CCR check is finished Do you want to continue (y - yes, n - no)? 
[n]: y A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-09-02_10-50-52-PM.out' Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-09-02_10-50-52-PM.err' ######################## CLEAN OPERATION START ######################## Enterprise Manager Configuration Assistant START EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2014-09-02_10-51-06-PM.log Updating Enterprise Manager ASM targets (if any) Updating Enterprise Manager listener targets (if any) Enterprise Manager Configuration Assistant END Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2014-09-02_10-51-38-PM.log Network Configuration clean config START Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2014-09-02_10-51-38-PM.log De-configuring Local Net Service Names configuration file... Local Net Service Names configuration file de-configured successfully. De-configuring backup files... Backup files de-configured successfully. The network configuration has been cleaned up successfully. Network Configuration clean config END Oracle Configuration Manager clean START OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean2193.log Oracle Configuration Manager clean END Setting the force flag to false Setting the force flag to cleanup the Oracle Base Oracle Universal Installer clean START Detach Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node : Done Delete directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node : Done Failed to delete the directory '/u01/app/oracle'. The directory is in use. Delete directory '/u01/app/oracle' on the local node : Failed <<<< Oracle Universal Installer cleanup completed with errors. Oracle Universal Installer clean END ## [START] Oracle install clean ## Clean install operation removing temporary directory '/tmp/deinstall2014-09-02_10-50-23PM' on node 'rac3' ## [END] Oracle install clean ## ######################### CLEAN OPERATION END ######################### ####################### CLEAN OPERATION SUMMARY ####################### Cleaning the config for CCR As CCR is not configured, so skipping the cleaning of CCR configuration CCR clean is finished Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node. Successfully deleted directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node. Failed to delete directory '/u01/app/oracle' on the local node. Oracle Universal Installer cleanup completed with errors. Oracle deinstall tool successfully cleaned up temporary directories. ####################################################################### ############# ORACLE DEINSTALL & DECONFIG TOOL END ############# [oracle@rac3 deinstall]$ |
6. Stop node 3's clusterware resources: NodeApps
Run this on any one of the remaining nodes; node 2 is used here.
First, check the current clusterware state on node 3:
[oracle@rac3 app]$ ps -ef | grep d.bin
grid      6699     1  0 21:48 ?        00:00:00 /u01/app/11.2.0/grid/bin/evmd.bin
root      6725     1  0 21:48 ?        00:00:00 /u01/app/11.2.0/grid/bin/cssdmonitor
root      6745     1  0 21:48 ?        00:00:00 /u01/app/11.2.0/grid/bin/cssdagent
grid      6757     1  0 21:48 ?        00:00:05 /u01/app/11.2.0/grid/bin/ocssd.bin
root      6829     1  0 21:49 ?        00:00:00 /u01/app/11.2.0/grid/bin/octssd.bin
root      6985     1  0 21:49 ?        00:00:06 /u01/app/11.2.0/grid/bin/crsd.bin reboot
grid      7099  6699  0 21:49 ?        00:00:00 /u01/app/11.2.0/grid/bin/evmlogger.bin -o /u01/app/11.2.0/grid/evm/log/evmlogger.info -l /u01/app/11.2.0/grid/evm/log/evmlogger.log
root      7137     1  0 21:49 ?        00:00:03 /u01/app/11.2.0/grid/bin/orarootagent.bin
grid      7140     1  0 21:49 ?        00:00:00 /u01/app/11.2.0/grid/bin/oraagent.bin
root      7701     1  0 21:51 ?        00:00:00 /u01/app/11.2.0/grid/bin/osysmond.bin
oracle   19705 14751  0 22:59 pts/1    00:00:00 grep d.bin
root     25174     1  0 21:10 ?        00:00:06 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
grid     26916     1  0 21:11 ?        00:00:01 /u01/app/11.2.0/grid/bin/oraagent.bin
grid     26927     1  0 21:11 ?        00:00:00 /u01/app/11.2.0/grid/bin/mdnsd.bin
grid     26969     1  0 21:11 ?        00:00:01 /u01/app/11.2.0/grid/bin/gpnpd.bin
grid     26990     1  0 21:11 ?        00:00:04 /u01/app/11.2.0/grid/bin/gipcd.bin
root     27087     1  0 21:12 ?        00:00:08 /u01/app/11.2.0/grid/bin/orarootagent.bin
[oracle@rac3 app]$ ps -ef | grep ons
grid      7262     1  0 21:50 ?        00:00:00 /u01/app/11.2.0/grid/opmn/bin/ons -d
grid      7263  7262  0 21:50 ?        00:00:00 /u01/app/11.2.0/grid/opmn/bin/ons -d
oracle   19709 14751  0 22:59 pts/1    00:00:00 grep ons
[oracle@rac3 app]$

[grid@rac3 ~]$ crs_stat -t -v | grep rac3
ora....SM3.asm application    0/5    0/0    ONLINE    ONLINE    rac3
ora.rac3.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.rac3.ons   application    0/3    0/0    ONLINE    ONLINE    rac3
ora.rac3.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    rac3
[grid@rac3 ~]$
On node 2, stop node 3's NodeApps:
[grid@rac2 ~]$ srvctl stop nodeapps -n rac3 -f
[grid@rac2 ~]$
Clusterware status after stopping:
[grid@rac3 ~]$ crs_stat -t -v | grep rac3
ora....SM3.asm application    0/5    0/0    ONLINE    ONLINE    rac3
ora.rac3.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.rac3.ons   application    0/3    0/0    OFFLINE   OFFLINE
ora.rac3.vip   ora....t1.type 0/0    0/0    OFFLINE   OFFLINE
[grid@rac3 ~]$
7. Update the cluster node list
Run this on every one of the remaining RAC nodes.
Execute it as the oracle OS user.
Node 1:
[oracle@rac1 ~]$ whoami
oracle
[oracle@rac1 ~]$ hostname
rac1
[oracle@rac1 ~]$
[oracle@rac1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac1 bin]$ ll
total 296
-rwxr-x--- 1 oracle oinstall    227 Aug 31 21:46 addLangs.sh
-rwxr-x--- 1 oracle oinstall    645 Aug 31 21:46 addNode.sh
-rwxr-x--- 1 oracle oinstall    297 Aug 31 21:46 attachHome.sh
-rwxr-x--- 1 oracle oinstall    203 Aug 31 21:46 detachHome.sh
-rwxr-xr-x 1 oracle oinstall    658 Jul 11  2013 filesList.bat
-rwxr-xr-x 1 oracle oinstall    201 Jul 11  2013 filesList.properties
-rwxr-xr-x 1 oracle oinstall    837 Jul 11  2013 filesList.sh
-rwxr-x--- 1 oracle oinstall  60809 Aug 25  2013 lsnodes
drwxr-xr-x 2 oracle oinstall   4096 Aug 31 21:44 resource
-rwxr-x--- 1 oracle oinstall   1368 Aug 31 21:46 runConfig.sh
-rwxr-x--- 1 oracle oinstall 164048 Jul 11  2013 runInstaller
-rwxr-x--- 1 oracle oinstall     79 Aug 31 21:46 runInstaller.sh
-rwxr-x--- 1 oracle oinstall  21788 Jul 11  2013 runSSHSetup.sh
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 1842 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac1 bin]$
Node 2:
[root@rac2 ~]# su - oracle
[oracle@rac2 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac2 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 1234 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac2 bin]$
8. Deconfigure (clean out) CRS on node 3
Run this on node 3, as root.
[root@rac3 ~]# whoami root [root@rac3 ~]# hostname rac3 [root@rac3 ~]# ls -l /u01/app/11.2.0/grid/crs/install/ total 8340 -rwxr-xr-x 1 grid oinstall 1276 Sep 2 00:10 cmdllroot.sh -rwxr-xr-x 1 grid oinstall 739 Sep 1 23:44 crsconfig_addparams -r-xr-xr-x 1 grid oinstall 797 Sep 1 23:44 crsconfig_addparams.sbs -rwxr-xr-x 1 root oinstall 497715 Sep 1 23:44 crsconfig_lib.pm -rwxr-xr-x 1 grid oinstall 4145 Sep 1 23:44 crsconfig_params -rwxr-xr-x 1 grid oinstall 4877 Sep 1 23:44 crsconfig_params.sbs -rwxr-xr-x 1 root oinstall 41420 Sep 1 23:44 crsdelete.pm -rwxr-xr-x 1 root oinstall 24317 Sep 1 23:44 crspatch.pm -rwxr-xr-x 1 root oinstall 8334 Sep 1 23:44 hasdconfig.pl -rw-r--r-- 1 grid oinstall 68 Sep 1 23:44 inittab -rwxr-xr-x 1 grid oinstall 115 Sep 1 23:44 install.excl -rwxr-xr-x 1 grid oinstall 0 Sep 1 23:44 install.incl -rwxr-xr-x 1 grid oinstall 17 Sep 1 23:44 installRemove.excl -r-xr-xr-- 1 grid oinstall 2156 Sep 2 00:10 onsconfig -rwxr-xr-x 1 root oinstall 25147 Sep 1 23:44 oraacfs.pm -rw-r--r-- 1 grid oinstall 220 Sep 1 23:44 oracle-ohasd.conf -rwxr-xr-x 1 root oinstall 13478 Sep 1 23:44 oracss.pm -rw-r--r-- 1 grid oinstall 414 Sep 1 23:44 paramfile.crs -rw-r--r-- 1 grid oinstall 87 Sep 1 23:44 ParentDirPerm_rac2.txt -rw-r--r-- 1 root oinstall 87 Sep 2 00:10 ParentDirPerm_rac3.txt -rwxr-xr-x 1 root oinstall 5344 Sep 2 00:10 preupdate.sh -rwxr-xr-x 1 root oinstall 36870 Sep 1 23:44 rootcrs.pl -rwxr-xr-x 1 root oinstall 17679 Sep 1 23:44 roothas.pl -rwxr-xr-x 1 root oinstall 915 Sep 1 23:44 rootofs.sh -rwxr-xr-x 1 grid oinstall 3278 Sep 1 23:44 s_crsconfig_defs -rwxr-xr-x 1 root oinstall 102572 Sep 1 23:44 s_crsconfig_lib.pm -rwxr-x--- 1 grid oinstall 403 Sep 1 23:44 s_crsconfig_rac2_env.txt -rwxr-x--- 1 root oinstall 403 Sep 2 00:10 s_crsconfig_rac3_env.txt -rwxr-xr-x 1 root oinstall 7636861 Sep 1 23:44 tfa_setup.sh [root@rac3 ~]# ls -l /u01/app/11.2.0/grid/crs/install/ | grep rootcrs.pl -rwxr-xr-x 1 root oinstall 36870 Sep 1 23:44 rootcrs.pl [root@rac3 ~]# [root@rac3 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params Network exists: 1/192.168.56.0/255.255.255.0/eth0, type static VIP exists: /rac1-vip/192.168.56.165/192.168.56.0/255.255.255.0/eth0, hosting node rac1 VIP exists: /rac2-vip/192.168.56.166/192.168.56.0/255.255.255.0/eth0, hosting node rac2 VIP exists: /rac3-vip/192.168.56.167/192.168.56.0/255.255.255.0/eth0, hosting node rac3 GSD exists ONS exists: Local port 6100, remote port 6200, EM port 2016 PRKO-2426 : ONS is already stopped on node(s): rac3 PRKO-2425 : VIP is already stopped on node(s): rac3 PRKO-2440 : Network resource is already stopped. 
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac3' CRS-2677: Stop of 'ora.registry.acfs' on 'rac3' succeeded CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac3' CRS-2673: Attempting to stop 'ora.crsd' on 'rac3' CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac3' CRS-2673: Attempting to stop 'ora.CRS.dg' on 'rac3' CRS-2673: Attempting to stop 'ora.DATA01.dg' on 'rac3' CRS-2677: Stop of 'ora.DATA01.dg' on 'rac3' succeeded CRS-2677: Stop of 'ora.CRS.dg' on 'rac3' succeeded CRS-2673: Attempting to stop 'ora.asm' on 'rac3' CRS-2677: Stop of 'ora.asm' on 'rac3' succeeded CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac3' has completed CRS-2677: Stop of 'ora.crsd' on 'rac3' succeeded CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac3' CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac3' CRS-2673: Attempting to stop 'ora.crf' on 'rac3' CRS-2673: Attempting to stop 'ora.ctssd' on 'rac3' CRS-2673: Attempting to stop 'ora.evmd' on 'rac3' CRS-2673: Attempting to stop 'ora.asm' on 'rac3' CRS-2677: Stop of 'ora.crf' on 'rac3' succeeded CRS-2677: Stop of 'ora.mdnsd' on 'rac3' succeeded CRS-2677: Stop of 'ora.evmd' on 'rac3' succeeded CRS-2677: Stop of 'ora.ctssd' on 'rac3' succeeded CRS-2677: Stop of 'ora.drivers.acfs' on 'rac3' succeeded CRS-2677: Stop of 'ora.asm' on 'rac3' succeeded CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac3' CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac3' succeeded CRS-2673: Attempting to stop 'ora.cssd' on 'rac3' CRS-2677: Stop of 'ora.cssd' on 'rac3' succeeded CRS-2673: Attempting to stop 'ora.gipcd' on 'rac3' CRS-2677: Stop of 'ora.gipcd' on 'rac3' succeeded CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac3' CRS-2677: Stop of 'ora.gpnpd' on 'rac3' succeeded CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac3' has completed CRS-4133: Oracle High Availability Services has been stopped. Removing Trace File Analyzer Successfully deconfigured Oracle clusterware stack on this node [root@rac3 ~]# |
Status after the deconfiguration succeeds:
[root@rac3 ~]# ps -ef | grep d.bin root 24992 7227 0 23:17 pts/1 00:00:00 grep d.bin [root@rac3 ~]# ps -ef | grep crs root 24994 7227 0 23:17 pts/1 00:00:00 grep crs [root@rac3 ~]# [grid@rac2 ~]$ crs_stat -v NAME=ora.CRS.dg TYPE=ora.diskgroup.type RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD= FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.DATA01.dg TYPE=ora.diskgroup.type RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD= FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.LISTENER.lsnr TYPE=ora.listener.type ENABLED@SERVERNAME(rac3)=0 RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD= FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.LISTENER_SCAN1.lsnr TYPE=ora.scan_listener.type RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac2 NAME=ora.asm TYPE=ora.asm.type GEN_USR_ORA_INST_NAME@SERVERNAME(rac1)=+ASM1 GEN_USR_ORA_INST_NAME@SERVERNAME(rac2)=+ASM2 GEN_USR_ORA_INST_NAME@SERVERNAME(rac3)=+ASM3 RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD= FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.ceph.db TYPE=ora.database.type GEN_START_OPTIONS@SERVERNAME(rac1)=open GEN_START_OPTIONS@SERVERNAME(rac2)=open GEN_START_OPTIONS@SERVERNAME(rac3)=open GEN_USR_ORA_INST_NAME@SERVERNAME(rac1)=ceph1 GEN_USR_ORA_INST_NAME@SERVERNAME(rac2)=ceph2 RESTART_ATTEMPTS=2 RESTART_COUNT=0 USR_ORA_INST_NAME@SERVERNAME(rac1)=ceph1 USR_ORA_INST_NAME@SERVERNAME(rac2)=ceph2 FAILURE_THRESHOLD=1 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.cvu TYPE=ora.cvu.type RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac2 NAME=ora.gsd TYPE=ora.gsd.type RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD= FAILURE_COUNT=0 TARGET=OFFLINE STATE=OFFLINE NAME=ora.net1.network TYPE=ora.network.type RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD= FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.oc4j TYPE=ora.oc4j.type RESTART_ATTEMPTS=1 RESTART_COUNT=0 FAILURE_THRESHOLD=2 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac2 NAME=ora.ons TYPE=ora.ons.type RESTART_ATTEMPTS=3 RESTART_COUNT=0 FAILURE_THRESHOLD= FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.rac1.ASM1.asm TYPE=application GEN_USR_ORA_INST_NAME@SERVERNAME(rac1)=+ASM1 GEN_USR_ORA_INST_NAME@SERVERNAME(rac2)=+ASM2 GEN_USR_ORA_INST_NAME@SERVERNAME(rac3)=+ASM3 RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.rac1.LISTENER_RAC1.lsnr TYPE=application ENABLED@SERVERNAME(rac3)=0 RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.rac1.gsd TYPE=application RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=OFFLINE STATE=OFFLINE NAME=ora.rac1.ons TYPE=application RESTART_ATTEMPTS=3 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.rac1.vip TYPE=ora.cluster_vip_net1.type RESTART_ATTEMPTS=0 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=1 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.rac2.ASM2.asm TYPE=application GEN_USR_ORA_INST_NAME@SERVERNAME(rac1)=+ASM1 GEN_USR_ORA_INST_NAME@SERVERNAME(rac2)=+ASM2 GEN_USR_ORA_INST_NAME@SERVERNAME(rac3)=+ASM3 RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac2 NAME=ora.rac2.LISTENER_RAC2.lsnr TYPE=application ENABLED@SERVERNAME(rac3)=0 RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 
FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac2 NAME=ora.rac2.gsd TYPE=application RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=OFFLINE STATE=OFFLINE NAME=ora.rac2.ons TYPE=application RESTART_ATTEMPTS=3 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac2 NAME=ora.rac2.vip TYPE=ora.cluster_vip_net1.type RESTART_ATTEMPTS=0 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=1 TARGET=ONLINE STATE=ONLINE on rac2 NAME=ora.registry.acfs TYPE=ora.registry.acfs.type RESTART_ATTEMPTS=5 RESTART_COUNT=0 FAILURE_THRESHOLD= FAILURE_COUNT=0 TARGET=ONLINE STATE=ONLINE on rac1 NAME=ora.scan1.vip TYPE=ora.scan_vip.type RESTART_ATTEMPTS=0 RESTART_COUNT=0 FAILURE_THRESHOLD=0 FAILURE_COUNT=1 TARGET=ONLINE STATE=ONLINE on rac2 [grid@rac2 ~]$ |
Normally, all clusterware resources related to node 3 should now be gone.
It can still happen, however, that node 3's VIP resource is not cleaned up completely by the steps above.
In that case, remove it manually:
srvctl stop vip -i ora.rac3.vip -f
srvctl remove vip -i ora.rac3.vip -f
crsctl delete resource ora.rac3.vip -f
The exact resource names can be found with "crs_stat -v"; its output was shown earlier in this article.
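To check whether anything tied to node 3 is still registered, crsctl's resource filter can also be used from one of the remaining nodes; a hedged sketch (the filter string below is only an example):

[grid@rac2 ~]$ crsctl status resource -w "NAME co rac3"

If the cleanup was complete, no rac3-related resources should be returned.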
9. Delete node 3 from the cluster (CRS)
Run this on any one of the remaining RAC nodes; node 2 is used here.
Execute it as the root OS user.
[root@rac2 ~]# hostname
rac2
[root@rac2 ~]# whoami
root
[root@rac2 ~]# /u01/app/11.2.0/grid/bin/olsnodes -t -s
rac1    Active   Unpinned
rac2    Active   Unpinned
rac3    Inactive Unpinned
[root@rac2 ~]#
[root@rac2 ~]# /u01/app/11.2.0/grid/bin/crsctl delete node -n rac3
CRS-4661: Node rac3 successfully deleted.
[root@rac2 ~]# /u01/app/11.2.0/grid/bin/olsnodes -t -s
rac1    Active   Unpinned
rac2    Active   Unpinned
[root@rac2 ~]#
10. Update the cluster node list
Run this on node 3, logged in as the grid OS user.
[root@rac3 app]# hostname
rac3
[root@rac3 app]# whoami
root
[root@rac3 app]# su - grid
[grid@rac3 ~]$ cd $ORACLE_HOME/oui/bin
[grid@rac3 bin]$ ll
total 296
-rwxr-x--- 1 grid oinstall    215 Sep  1 23:46 addLangs.sh
-rwxr-x--- 1 grid oinstall    626 Sep  1 23:46 addNode.sh
-rwxr-x--- 1 grid oinstall    285 Sep  1 23:46 attachHome.sh
-rwxr-x--- 1 grid oinstall    191 Sep  1 23:46 detachHome.sh
-rwxr-xr-x 1 grid oinstall    658 Sep  1 23:46 filesList.bat
-rwxr-xr-x 1 grid oinstall    201 Sep  1 23:46 filesList.properties
-rwxr-xr-x 1 grid oinstall    837 Sep  1 23:46 filesList.sh
-rwxr-x--- 1 grid oinstall  60809 Sep  1 23:46 lsnodes
drwxr-xr-x 2 grid oinstall   4096 Sep  1 23:46 resource
-rwxr-x--- 1 grid oinstall   1159 Sep  1 23:46 runConfig.sh
-rwxr-x--- 1 grid oinstall 164048 Sep  1 23:46 runInstaller
-rwxr-x--- 1 grid oinstall     60 Sep  1 23:46 runInstaller.sh
-rwxr-x--- 1 grid oinstall  21788 Sep  1 23:46 runSSHSetup.sh
[grid@rac3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac3}" CRS=true -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2015 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[grid@rac3 bin]$
11. Remove the Grid Infrastructure (clusterware) software from node 3
Run this on node 3 as the grid OS user.
[grid@rac3 bin]$ whoami
grid
[grid@rac3 bin]$ cd $ORACLE_HOME/deinstall
[grid@rac3 deinstall]$ ll
total 92
-rwxr-xr-x 1 grid oinstall  9780 Sep  1 23:46 bootstrap.pl
-rwxr-xr-x 1 grid oinstall  9643 Sep  1 23:46 deinstall
-rwxr-xr-x 1 grid oinstall 18841 Sep  1 23:46 deinstall.pl
-rw-r--r-- 1 grid oinstall  3466 Sep  1 23:46 deinstall.xml
drwxr-xr-x 2 grid oinstall  4096 Sep  1 23:46 jlib
-rw-r--r-- 1 grid oinstall   409 Sep  1 23:46 readme.txt
drwxr-xr-x 2 grid oinstall  4096 Sep  1 23:46 response
-rwxr-xr-x 1 grid oinstall 32343 Sep  1 23:46 sshUserSetup.sh
[grid@rac3 deinstall]$
Run deinstall:
[grid@rac3 deinstall]$ ./deinstall -local Checking for required files and bootstrapping ... Please wait ... Location of logs /tmp/deinstall2014-09-02_11-38-16PM/logs/ ############ ORACLE DEINSTALL & DECONFIG TOOL START ############ ######################### CHECK OPERATION START ######################### ## [START] Install check configuration ## Checking for existence of the Oracle home location /u01/app/11.2.0/grid Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster Oracle Base selected for deinstall is: /u01/app/grid Checking for existence of central inventory location /u01/app/oraInventory Checking for existence of the Oracle Grid Infrastructure home The following nodes are part of this cluster: rac3 Checking for sufficient temp space availability on node(s) : 'rac3' ## [END] Install check configuration ## Traces log file: /tmp/deinstall2014-09-02_11-38-16PM/logs//crsdc.log Enter an address or the name of the virtual IP used on node "rac3"[rac3-vip] > The following information can be collected by running "/sbin/ifconfig -a" on node "rac3" Enter the IP netmask of Virtual IP "192.168.56.167" on node "rac3"[255.255.255.0] > Enter the network interface name on which the virtual IP address "192.168.56.167" is active > Enter an address or the name of the virtual IP[] > Network Configuration check config START Network de-configuration trace file location: /tmp/deinstall2014-09-02_11-38-16PM/logs/netdc_check2014-09-02_11-38-34-PM.log Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER]: Network Configuration check config END Asm Check Configuration START ASM de-configuration trace file location: /tmp/deinstall2014-09-02_11-38-16PM/logs/asmcadc_check2014-09-02_11-38-35-PM.log ######################### CHECK OPERATION END ######################### ####################### CHECK OPERATION SUMMARY ####################### Oracle Grid Infrastructure Home is: The cluster node(s) on which the Oracle home deinstallation will be performed are:rac3 Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac3', and the global configuration will be removed. Oracle Home selected for deinstall is: /u01/app/11.2.0/grid Inventory Location where the Oracle home registered is: /u01/app/oraInventory Following RAC listener(s) will be de-configured: LISTENER Option -local will not modify any ASM configuration. Do you want to continue (y - yes, n - no)? [n]: y A log of this session will be written to: '/tmp/deinstall2014-09-02_11-38-16PM/logs/deinstall_deconfig2014-09-02_11-38-21-PM.out' Any error messages from this session will be written to: '/tmp/deinstall2014-09-02_11-38-16PM/logs/deinstall_deconfig2014-09-02_11-38-21-PM.err' ######################## CLEAN OPERATION START ######################## ASM de-configuration trace file location: /tmp/deinstall2014-09-02_11-38-16PM/logs/asmcadc_clean2014-09-02_11-38-37-PM.log ASM Clean Configuration END Network Configuration clean config START Network de-configuration trace file location: /tmp/deinstall2014-09-02_11-38-16PM/logs/netdc_clean2014-09-02_11-38-37-PM.log De-configuring RAC listener(s): LISTENER De-configuring listener: LISTENER Stopping listener on node "rac3": LISTENER Warning: Failed to stop listener. Listener may not be running. Listener de-configured successfully. De-configuring backup files... Backup files de-configured successfully. The network configuration has been cleaned up successfully. 
Network Configuration clean config END ----------------------------------------> The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes. Run the following command as the root user or the administrator on node "rac3". /tmp/deinstall2014-09-02_11-38-16PM/perl/bin/perl -I/tmp/deinstall2014-09-02_11-38-16PM/perl/lib -I/tmp/deinstall2014-09-02_11-38-16PM/crs/install /tmp/deinstall2014-09-02_11-38-16PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2014-09-02_11-38-16PM/response/deinstall_Ora11g_gridinfrahome1.rsp" Press Enter after you finish running the above commands <---------------------------------------- Remove the directory: /tmp/deinstall2014-09-02_11-38-16PM on node: Setting the force flag to false Setting the force flag to cleanup the Oracle Base Oracle Universal Installer clean START Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done Delete directory '/u01/app/11.2.0/grid' on the local node : Done Delete directory '/u01/app/oraInventory' on the local node : Done Delete directory '/u01/app/grid' on the local node : Done Oracle Universal Installer cleanup was successful. Oracle Universal Installer clean END ## [START] Oracle install clean ## Clean install operation removing temporary directory '/tmp/deinstall2014-09-02_11-38-16PM' on node 'rac3' ## [END] Oracle install clean ## ######################### CLEAN OPERATION END ######################### ####################### CLEAN OPERATION SUMMARY ####################### Following RAC listener(s) were de-configured successfully: LISTENER Oracle Clusterware is stopped and successfully de-configured on node "rac3" Oracle Clusterware is stopped and de-configured successfully. Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node. Successfully deleted directory '/u01/app/11.2.0/grid' on the local node. Successfully deleted directory '/u01/app/oraInventory' on the local node. Successfully deleted directory '/u01/app/grid' on the local node. Oracle Universal Installer cleanup was successful. Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac3' at the end of the session. Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac3' at the end of the session. Run 'rm -rf /etc/oratab' as root on node(s) 'rac3' at the end of the session. Oracle deinstall tool successfully cleaned up temporary directories. ####################################################################### ############# ORACLE DEINSTALL & DECONFIG TOOL END ############# [grid@rac3 deinstall]$ |
When the log above reaches the step "Run the following command as the root user or the administrator on node rac3.", open another terminal, log in to node 3, and run the command that the script printed.
The details are as follows:
[root@rac3 ~]# /tmp/deinstall2014-09-02_11-38-16PM/perl/bin/perl -I/tmp/deinstall2014-09-02_11-38-16PM/perl/lib -I/tmp/deinstall2014-09-02_11-38-16PM/crs/install /tmp/deinstall2014-09-02_11-38-16PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2014-09-02_11-38-16PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2014-09-02_11-38-16PM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly    #
# cleanup the processes started by Oracle clusterware         #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
[root@rac3 ~]#
After both of the scripts above complete successfully, three files/directories still need to be removed on node 3, as root:
Run ‘rm -rf /etc/oraInst.loc’ as root on node(s) ‘rac3’ at the end of the session.
Run ‘rm -rf /opt/ORCLfmap’ as root on node(s) ‘rac3’ at the end of the session.
Run ‘rm -rf /etc/oratab’ as root on node(s) ‘rac3’ at the end of the session.
Log:
[root@rac3 ~]# rm -rf /etc/oraInst.loc
[root@rac3 ~]# rm -rf /opt/ORCLfmap
[root@rac3 ~]# rm -rf /etc/oratab
[root@rac3 ~]#
12. Update the cluster node list
Run this on every remaining RAC node, as the grid OS user.
Node 1:
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2}" CRS=true
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 1829 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[grid@rac1 ~]$
Node 2:
[root@rac2 ~]# su - grid
[grid@rac2 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2}" CRS=true
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 1200 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[grid@rac2 ~]$
13. Confirm that node 3 has been removed from the RAC
Run this on any one of the remaining cluster nodes.
CRSCTL:
[grid@rac2 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.DATA01.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.registry.acfs
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2
ora.ceph.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  ONLINE       rac2                     Open
ora.cvu
      1        ONLINE  ONLINE       rac2
ora.oc4j
      1        ONLINE  ONLINE       rac2
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac2
[grid@rac2 ~]$
SQL*Plus:
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ export ORACLE_SID=ceph1
[oracle@rac1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Tue Sep 2 23:52:52 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> col instance for a9
SQL> select thread#,status,instance from v$thread;

   THREAD# STATUS INSTANCE
---------- ------ ---------
         1 OPEN   ceph1
         2 OPEN   ceph2

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
[oracle@rac1 ~]$
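The Cluster Verification Utility provides one more documented check for node removal. A hedged example, run as grid from one of the remaining nodes (-verbose is optional):

[grid@rac2 ~]$ cluvfy stage -post nodedel -n rac3 -verbose

It should finish with a message along the lines of "Post-check for node removal was successful".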
14. Done.
At this point, the planned removal of a node from an Oracle Database 11g RAC is complete.
——————————————————————————————————————
End.