In Oracle RAC, the OCR holds the metadata for the cluster's nodes, instances, and all related resources, which makes it critically important.
OCFS2, on the other hand, has caused us far more trouble than raw devices (RAW) and has been anything but stable.
That raises the question of how to move the OCR off OCFS2 and onto raw devices.

As the title says, this article walks through the details of migrating the OCR and voting disk from OCFS2 to raw devices on Oracle RAC 10g.
————————————————
Environment
Shared storage:
Openfiler 2.99
IP: 192.168.56.130

Node 1:
Hostname: rac1
IP: 192.168.56.11

Node 2:
Hostname: rac2
IP: 192.168.56.12
————————————————————

Current status of the RAC resources:

[root@rac1 ~]# /u01/app/10g/grid/bin/crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora.orcl.db    application    0/0    0/1    ONLINE    ONLINE    rac1        
ora....l1.inst application    0/5    0/0    ONLINE    ONLINE    rac1        
ora....l2.inst application    0/5    0/0    ONLINE    ONLINE    rac2        
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1        
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    0/5    0/0    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac1        
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2        
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2        
ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2        
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2        
ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2        
[root@rac1 ~]#

Running state of the OCFS2 devices and services:
Node 1:

[root@rac1 ~]# mounted.ocfs2 -d
Device                FS     UUID                                  Label
/dev/sdc1             ocfs2  ca6c39d0-56fc-4a1d-b960-93d7ecc5af94  oracle
[root@rac1 ~]# mounted.ocfs2 -f
Device                FS     Nodes
/dev/sdc1             ocfs2  rac1, rac2
[root@rac1 ~]# 

Node 2:

[root@rac2 ~]# mounted.ocfs2 -d
Device                FS     UUID                                  Label
/dev/sdb1             ocfs2  ca6c39d0-56fc-4a1d-b960-93d7ecc5af94  oracle
[root@rac2 ~]# mounted.ocfs2 -f
Device                FS     Nodes
/dev/sdb1             ocfs2  rac1, rac2
[root@rac2 ~]# 

Contents of the OCFS2 file system:

[root@rac1 ~]# ll /ocfs/*
total 14076
-rw-r----- 1 root oinstall  4173824 Sep 17 23:53 ocr
-rw-r--r-- 1 grid oinstall 10240000 Sep 25 21:06 votedisk
[root@rac1 ~]# 

O2CB status:
Node 1:

[root@rac1 ~]# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
  Heartbeat dead threshold: 61
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 3000
Checking O2CB heartbeat: Active
[root@rac1 ~]# 

Node 2:

[root@rac2 ~]# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
  Heartbeat dead threshold: 61
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 3000
Checking O2CB heartbeat: Active
[root@rac2 ~]# 

Oracle ASMLib status:
Node 1:

[root@rac1 ~]# /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[root@rac1 ~]# /etc/init.d/oracleasm listdisks
DATA01
[root@rac1 ~]# ll /dev/oracleasm/disks/*
brw-rw---- 1 oracle oinstall 8, 17 Sep 25 21:05 /dev/oracleasm/disks/DATA01
[root@rac1 ~]#

Node 2:

[root@rac2 ~]# /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[root@rac2 ~]# /etc/init.d/oracleasm listdisks
DATA01
[root@rac2 ~]# ll /dev/oracleasm/disks/*
brw-rw---- 1 oracle oinstall 8, 33 Sep 25 21:05 /dev/oracleasm/disks/DATA01
[root@rac2 ~]# 

I. Create a new LUN, raw10g, on the shared storage and present it to both RAC 10g nodes.
(Screenshot: Openfiler LUN configuration for raw10g)
Discover and log in to the new LUN on nodes 1 and 2 (the steps are identical on both nodes; only node 1 is shown below).
Node 1)

[root@rac1 ~]# fdisk -l | grep --color "Disk "
Disk /dev/sda: 21.4 GB, 21474836480 bytes
Disk /dev/sdb: 10.2 GB, 10234101760 bytes
Disk /dev/sdc: 1006 MB, 1006632960 bytes
[root@rac1 ~]# 
[root@rac1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.56.130
192.168.56.130:3260,1 ocfs
192.168.56.130:3260,1 ora10gdata01
192.168.56.130:3260,1 raw10g
[root@rac1 ~]# 
[root@rac1 ~]# iscsiadm -m node -T raw10g -p 192.168.56.130 -l
Logging in to [iface: default, target: raw10g, portal: 192.168.56.130,3260]
Login to [iface: default, target: raw10g, portal: 192.168.56.130,3260]: successful
[root@rac1 ~]# 
[root@rac1 ~]# fdisk -l | grep --color "Disk "
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sda: 21.4 GB, 21474836480 bytes
Disk /dev/sdb: 10.2 GB, 10234101760 bytes
Disk /dev/sdc: 1006 MB, 1006632960 bytes
Disk /dev/sdd: 20.4 GB, 20468203520 bytes
[root@rac1 ~]#
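
A note in passing: iSCSI logins made this way are not persistent across reboots by default. A minimal sketch for marking the new target to log in automatically at boot (assuming the iscsi initiator service itself is enabled):

# Mark raw10g for automatic login at boot; verify afterwards with
# "iscsiadm -m node -T raw10g -p 192.168.56.130"
iscsiadm -m node -T raw10g -p 192.168.56.130 \
    --op update -n node.startup -v automatic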

*** Partition the shared storage (this only needs to be done on one node)
Node 2)

[root@rac2 ~]# fdisk -l | grep --color "Disk "
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sda: 21.4 GB, 21474836480 bytes
Disk /dev/sdb: 10.2 GB, 10234101760 bytes
Disk /dev/sdc: 1006 MB, 1006632960 bytes
Disk /dev/sdd: 20.4 GB, 20468203520 bytes
[root@rac2 ~]# 
[root@rac2 ~]# fdisk /dev/sdd
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.


The number of cylinders for this disk is set to 19520.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/sdd: 20.4 GB, 20468203520 bytes
64 heads, 32 sectors/track, 19520 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-19520, default 1): 
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-19520, default 19520): +512M

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (490-19520, default 490): 
Using default value 490
Last cylinder or +size or +sizeM or +sizeK (490-19520, default 19520): +512M

Command (m for help): p

Disk /dev/sdd: 20.4 GB, 20468203520 bytes
64 heads, 32 sectors/track, 19520 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         489      500720   83  Linux
/dev/sdd2             490         978      500736   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac2 ~]# 
[root@rac2 ~]# partprobe /dev/sdd
[root@rac2 ~]# 
[root@rac2 ~]# fdisk -l /dev/sdd

Disk /dev/sdd: 20.4 GB, 20468203520 bytes
64 heads, 32 sectors/track, 19520 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         489      500720   83  Linux
/dev/sdd2             490         978      500736   83  Linux
[root@rac2 ~]# 

Node 1) After partitioning on node 2, have node 1 re-read the partition table:

[root@rac1 crs]# fdisk -l | grep --color "Disk "
Disk /dev/sda: 21.4 GB, 21474836480 bytes
Disk /dev/sdb: 10.2 GB, 10234101760 bytes
Disk /dev/sdc: 1006 MB, 1006632960 bytes
Disk /dev/sdd: 20.4 GB, 20468203520 bytes
[root@rac1 crs]# partprobe /dev/sdd
[root@rac1 crs]# fdisk -l /dev/sdd

Disk /dev/sdd: 20.4 GB, 20468203520 bytes
64 heads, 32 sectors/track, 19520 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         489      500720   83  Linux
/dev/sdd2             490         978      500736   83  Linux
[root@rac1 crs]#

*** Configure the raw devices (required on both nodes; node 1 shown as the example)

[root@rac1 crs]# raw -h
Usage:
  raw /dev/raw/rawN <major> <minor>
  raw /dev/raw/rawN /dev/<blockdev>
  raw -q /dev/raw/rawN
  raw -qa
[root@rac1 crs]# 
[root@rac1 script]# pwd
/script
[root@rac1 script]# ll
total 4
-rw-r--r-- 1 root root 76 Sep 25 23:08 rawdevices.sh
[root@rac1 script]# cat rawdevices.sh 
# OCR
raw /dev/raw/raw1 /dev/sdd1
# Voting Disk
raw /dev/raw/raw2 /dev/sdd2
[root@rac1 script]# 
[root@rac1 script]# ll /dev | grep --color raw
crw------- 1 root root   162,   0 Sep 25 21:51 rawctl
[root@rac1 script]# sh rawdevices.sh 
/dev/raw/raw1:  bound to major 8, minor 49
/dev/raw/raw2:  bound to major 8, minor 50
[root@rac1 script]# ll /dev | grep --color raw
drwxr-xr-x 2 root root         80 Sep 25 23:09 raw
crw------- 1 root root   162,   0 Sep 25 21:51 rawctl
[root@rac1 script]# ll /dev/raw/*
crw------- 1 root root 162, 1 Sep 25 23:09 /dev/raw/raw1
crw------- 1 root root 162, 2 Sep 25 23:09 /dev/raw/raw2
[root@rac1 script]# 
[root@rac1 script]# cat /etc/udev/rules.d/60-raw.rules 
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
#
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
KERNEL=="raw*",OWNER="grid",GROUP="oinstall",MODE="0660"
[root@rac1 script]# 
[root@rac1 script]# start_udev 
Starting udev:                                             [  OK  ]
[root@rac1 script]# 
[root@rac1 script]# ll /dev/raw/
total 0
crw-rw---- 1 grid oinstall 162, 1 Sep 25 23:09 raw1
crw-rw---- 1 grid oinstall 162, 2 Sep 25 23:09 raw2
[root@rac1 script]# 
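
The bindings created by rawdevices.sh live only in memory and will be gone after a reboot; the udev rule above only fixes ownership and permissions. One way to have the bindings recreated at boot is to add RUN+= rules to 60-raw.rules, following the example syntax in the file's own comments. A sketch, assuming /dev/sdd1 and /dev/sdd2 keep their names across reboots (SCSI device names can shift, so verify after each reboot):

# /etc/udev/rules.d/60-raw.rules (sketch)
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdd2", RUN+="/bin/raw /dev/raw/raw2 %N"
KERNEL=="raw*", OWNER="grid", GROUP="oinstall", MODE="0660"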

II. The OCR
1) OCR status
Check the current state of the OCR:

[root@rac1 ~]# /u01/app/10g/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3816
         Available space (kbytes) :     258304
         ID                       :  649874475
         Device/File Name         : /ocfs/clusterware/ocr
                                    Device/File integrity check succeeded

                                    Device/File not configured

         Cluster registry integrity check succeeded

[root@rac1 ~]#

2) Automatic OCR backups
Check the automatic OCR backup history:

[root@rac1 ~]# /u01/app/10g/grid/bin/ocrconfig -showbackup

rac2     2014/09/21 23:03:50     /u01/app/10g/grid/cdata/crs

rac2     2014/09/21 23:03:50     /u01/app/10g/grid/cdata/crs

rac2     2014/09/21 23:03:50     /u01/app/10g/grid/cdata/crs
[root@rac1 ~]#

While Oracle RAC is running, an automatic OCR backup is taken every four hours.
By default:
1. The automatic backup runs on only one node. If that node fails, CRS moves the backup duty to another available node.
2. The five most recent generations are retained:
   the three latest four-hourly backups,
   one from the previous day,
   and one from the previous week.
3. The timing and frequency of the automatic OCR backup cannot be changed in 10g (the destination directory, however, can be; see the sketch after the listing below).

[root@rac2 ~]# ll /u01/app/10g/grid/cdata/crs/
total 11856
-rw-r--r-- 1 root root 4042752 Sep 21 23:03 backup00.ocr
-rw-r--r-- 1 root root 4042752 Sep 21 23:03 day.ocr
-rw-r--r-- 1 root root 4042752 Sep 21 23:03 week.ocr
[root@rac2 ~]#
[root@rac2 ~]# du -sh /u01/app/10g/grid/cdata/crs/*
3.9M    /u01/app/10g/grid/cdata/crs/backup00.ocr
3.9M    /u01/app/10g/grid/cdata/crs/day.ocr
3.9M    /u01/app/10g/grid/cdata/crs/week.ocr
[root@rac2 ~]# 
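
Although the backup schedule is fixed, the directory receiving the automatic backups can be relocated, which is worth doing if you want them on storage that survives the loss of the backup node. A sketch, where /shared/ocrbackup is a hypothetical directory visible to both nodes:

# Point the automatic OCR backups at a new location (sketch)
/u01/app/10g/grid/bin/ocrconfig -backuploc /shared/ocrbackup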

3) Manual OCR backup
Besides the automatic backups described above, the OCR can also be backed up manually, by exporting it:

[root@rac1 crs]# pwd
/u01/app/10g/grid/cdata/crs
[root@rac1 crs]# ll
total 0
[root@rac1 crs]# /u01/app/10g/grid/bin/ocrconfig -export exp_20140925_1.ocr
[root@rac1 crs]# ll
total 88
-rw-r--r-- 1 root root 84760 Sep 25 22:33 exp_20140925_1.ocr
[root@rac1 crs]# du -sh *
88K     exp_20140925_1.ocr
[root@rac1 crs]# 

Note that an OCR export can only be performed as root.
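
Should the OCR ever have to be rebuilt from such an export, the counterpart command is ocrconfig -import, which likewise must run as root and requires Clusterware to be stopped on all nodes first. A sketch:

# Stop Clusterware on every node, then import the logical backup (sketch)
/u01/app/10g/grid/bin/crsctl stop crs
/u01/app/10g/grid/bin/ocrconfig -import /u01/app/10g/grid/cdata/crs/exp_20140925_1.ocr
/u01/app/10g/grid/bin/crsctl start crs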

4) Replace the OCFS2-based OCR with the raw device
ocrconfig will not replace the primary OCR while it is the only configured copy, so a temporary mirror file is created first (the file must exist before ocrconfig can use it), the primary is then switched to /dev/raw/raw1, and finally the temporary mirror is dropped:

[root@rac1 ~]# /u01/app/10g/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3816
         Available space (kbytes) :     258304
         ID                       :  649874475
         Device/File Name         : /ocfs/clusterware/ocr
                                    Device/File integrity check succeeded

                                    Device/File not configured

         Cluster registry integrity check succeeded

[root@rac1 ~]# 
[root@rac1 ~]# touch /ocfs/clusterware/ocr_mirror_1
[root@rac1 ~]# /u01/app/10g/grid/bin/ocrconfig -replace ocrmirror /ocfs/clusterware/ocr_mirror_1 
[root@rac1 ~]# /u01/app/10g/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3816
         Available space (kbytes) :     258304
         ID                       :  649874475
         Device/File Name         : /ocfs/clusterware/ocr
                                    Device/File integrity check succeeded
         Device/File Name         : /ocfs/clusterware/ocr_mirror_1
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded

[root@rac1 ~]# 
[root@rac1 ~]# /u01/app/10g/grid/bin/ocrconfig -replace ocr /dev/raw/raw1 
[root@rac1 ~]# 
[root@rac1 ~]# /u01/app/10g/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3816
         Available space (kbytes) :     258304
         ID                       :  649874475
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /ocfs/clusterware/ocr_mirror_1
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded

[root@rac1 ~]# 
[root@rac1 ~]# /u01/app/10g/grid/bin/ocrconfig -replace ocrmirror
[root@rac1 ~]# 
[root@rac1 ~]# /u01/app/10g/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3816
         Available space (kbytes) :     258304
         ID                       :  649874475
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded

                                    Device/File not configured

         Cluster registry integrity check succeeded

[root@rac1 ~]# 

With that, the OCR migration itself is done.
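
Before moving on, it is worth confirming the new configuration from the second node as well; ocrcheck can simply be run there, and if the Cluster Verification Utility is present in the Clusterware home, it can check OCR integrity cluster-wide. A sketch:

# Run on rac2: verify the OCR is seen and intact across nodes (sketch)
/u01/app/10g/grid/bin/ocrcheck
/u01/app/10g/grid/bin/cluvfy comp ocr -n all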

III. The voting disk
Check the current voting disk:

[root@rac1 ~]# /u01/app/10g/grid/bin/crsctl query css votedisk
 0.     0    /ocfs/clusterware/votedisk

located 1 votedisk(s).
[root@rac1 ~]#

Before making any changes to the voting disk, the Oracle Clusterware resources must first be stopped.

[root@rac1 ~]# /u01/app/10g/grid/bin/crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora.orcl.db    application    0/0    0/1    ONLINE    ONLINE    rac2        
ora....l1.inst application    0/5    0/0    ONLINE    ONLINE    rac1        
ora....l2.inst application    0/5    0/0    ONLINE    ONLINE    rac2        
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1        
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    0/5    0/0    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac1        
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2        
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2        
ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2        
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2        
ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2        
[root@rac1 ~]# 
[root@rac1 ~]# /u01/app/10g/grid/bin/crs_stop -all
Attempting to stop `ora.rac1.gsd` on member `rac1`
Attempting to stop `ora.rac1.ons` on member `rac1`
Attempting to stop `ora.rac2.gsd` on member `rac2`
Attempting to stop `ora.rac2.ons` on member `rac2`
Attempting to stop `ora.orcl.db` on member `rac2`
Stop of `ora.rac1.gsd` on member `rac1` succeeded.
Stop of `ora.rac2.gsd` on member `rac2` succeeded.
Stop of `ora.rac2.ons` on member `rac2` succeeded.
Stop of `ora.rac1.ons` on member `rac1` succeeded.
Stop of `ora.orcl.db` on member `rac2` succeeded.
`ora.orcl.orcl1.inst` is already OFFLINE.
`ora.orcl.orcl2.inst` is already OFFLINE.
Attempting to stop `ora.rac1.ASM1.asm` on member `rac1`
Attempting to stop `ora.rac1.LISTENER_RAC1.lsnr` on member `rac1`
Attempting to stop `ora.rac2.ASM2.asm` on member `rac2`
Attempting to stop `ora.rac2.LISTENER_RAC2.lsnr` on member `rac2`
Stop of `ora.rac1.LISTENER_RAC1.lsnr` on member `rac1` succeeded.
Attempting to stop `ora.rac1.vip` on member `rac1`
Stop of `ora.rac2.LISTENER_RAC2.lsnr` on member `rac2` succeeded.
Attempting to stop `ora.rac2.vip` on member `rac2`
Stop of `ora.rac1.vip` on member `rac1` succeeded.
Stop of `ora.rac2.vip` on member `rac2` succeeded.
Stop of `ora.rac1.ASM1.asm` on member `rac1` succeeded.
Stop of `ora.rac2.ASM2.asm` on member `rac2` succeeded.
CRS-0216: Could not stop resource 'ora.orcl.orcl1.inst'.

CRS-0216: Could not stop resource 'ora.orcl.orcl2.inst'.

[root@rac1 ~]# 
[root@rac1 ~]# /u01/app/10g/grid/bin/crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora.orcl.db    application    0/0    0/1    OFFLINE   OFFLINE               
ora....l1.inst application    0/5    0/0    OFFLINE   OFFLINE               
ora....l2.inst application    0/5    0/0    OFFLINE   OFFLINE               
ora....SM1.asm application    0/5    0/0    OFFLINE   OFFLINE               
ora....C1.lsnr application    0/5    0/0    OFFLINE   OFFLINE               
ora.rac1.gsd   application    0/5    0/0    OFFLINE   OFFLINE               
ora.rac1.ons   application    0/3    0/0    OFFLINE   OFFLINE               
ora.rac1.vip   application    0/0    0/0    OFFLINE   OFFLINE               
ora....SM2.asm application    0/5    0/0    OFFLINE   OFFLINE               
ora....C2.lsnr application    0/5    0/0    OFFLINE   OFFLINE               
ora.rac2.gsd   application    0/5    0/0    OFFLINE   OFFLINE               
ora.rac2.ons   application    0/3    0/0    OFFLINE   OFFLINE               
ora.rac2.vip   application    0/0    0/0    OFFLINE   OFFLINE               
[root@rac1 ~]# 

Replace the OCFS2-based voting disk with the raw device.
Because CSS itself is still online, 10g rejects "online" voting disk changes, so the add and delete below only succeed with -force (strictly speaking, Oracle recommends using -force only with the Clusterware stack fully down on all nodes).
Node 1)

[root@rac1 ~]# /u01/app/10g/grid/bin/crsctl query css votedisk
 0.     0    /ocfs/clusterware/votedisk

located 1 votedisk(s).
[root@rac1 ~]# 
[root@rac1 ~]# /u01/app/10g/grid/bin/crsctl delete css votedisk /ocfs/clusterware/votedisk -force
only one votedisk /ocfs/clusterware/votedisk configured.
please add another disk before removing the last one.
[root@rac1 ~]# 
[root@rac1 ~]# /u01/app/10g/grid/bin/crsctl add css votedisk /dev/raw/raw2 
Cluster is not in a ready state for online disk addition
[root@rac1 ~]# /u01/app/10g/grid/bin/crsctl add css votedisk /dev/raw/raw2 -force
Now formatting voting disk: /dev/raw/raw2
successful addition of votedisk /dev/raw/raw2.
[root@rac1 ~]#
[root@rac1 ~]# /u01/app/10g/grid/bin/crsctl query css votedisk
 0.     0    /ocfs/clusterware/votedisk
 1.     0    /dev/raw/raw2

located 2 votedisk(s).
[root@rac1 ~]# 
[root@rac1 ~]# /u01/app/10g/grid/bin/crsctl delete css votedisk /ocfs/clusterware/votedisk 
Cluster is not in a ready state for online disk removal
[root@rac1 ~]# 
[root@rac1 ~]# /u01/app/10g/grid/bin/crsctl delete css votedisk /ocfs/clusterware/votedisk -force
successful deletion of votedisk /ocfs/clusterware/votedisk.
[root@rac1 ~]# 
[root@rac1 ~]# /u01/app/10g/grid/bin/crsctl query css votedisk
 0.     0    /dev/raw/raw2

located 1 votedisk(s).
[root@rac1 ~]# 

Node 2)
With the change made on node 1, confirm the cluster's OCR and voting disk status from node 2:

[root@rac2 script]# /u01/app/10g/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3816
         Available space (kbytes) :     258304
         ID                       :  649874475
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded

                                    Device/File not configured

         Cluster registry integrity check succeeded

[root@rac2 script]# 
[root@rac2 script]# /u01/app/10g/grid/bin/crsctl query css votedisk
 0.     0    /dev/raw/raw2

located 1 votedisk(s).
[root@rac2 script]# 
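
With the voting disk relocated and the stack still down, taking a cold backup of it is cheap insurance; in 10g this is done with a plain dd of the device. A sketch, with a hypothetical backup path:

# Back up the new voting disk before restarting Clusterware (sketch)
dd if=/dev/raw/raw2 of=/u01/backup/votedisk_raw2.bak bs=4k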

IV. Start Oracle RAC
Test whether the cluster starts normally with the relocated OCR and voting disk.
This only needs to be done from one node; node 1 is used here:

[root@rac1 ~]# /u01/app/10g/grid/bin/crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora.orcl.db    application    0/0    0/1    OFFLINE   OFFLINE               
ora....l1.inst application    0/5    0/0    OFFLINE   OFFLINE               
ora....l2.inst application    0/5    0/0    OFFLINE   OFFLINE               
ora....SM1.asm application    0/5    0/0    OFFLINE   OFFLINE               
ora....C1.lsnr application    0/5    0/0    OFFLINE   OFFLINE               
ora.rac1.gsd   application    0/5    0/0    OFFLINE   OFFLINE               
ora.rac1.ons   application    0/3    0/0    OFFLINE   OFFLINE               
ora.rac1.vip   application    0/0    0/0    OFFLINE   OFFLINE               
ora....SM2.asm application    0/5    0/0    OFFLINE   OFFLINE               
ora....C2.lsnr application    0/5    0/0    OFFLINE   OFFLINE               
ora.rac2.gsd   application    0/5    0/0    OFFLINE   OFFLINE               
ora.rac2.ons   application    0/3    0/0    OFFLINE   OFFLINE               
ora.rac2.vip   application    0/0    0/0    OFFLINE   OFFLINE               
[root@rac1 ~]# 
[root@rac1 ~]# /u01/app/10g/grid/bin/crs_start -all
Attempting to start `ora.rac1.ASM1.asm` on member `rac1`
Attempting to start `ora.rac1.vip` on member `rac1`
Attempting to start `ora.rac2.vip` on member `rac2`
Attempting to start `ora.rac2.ASM2.asm` on member `rac2`
Start of `ora.rac1.vip` on member `rac1` succeeded.
Start of `ora.rac2.vip` on member `rac2` succeeded.
Attempting to start `ora.rac1.LISTENER_RAC1.lsnr` on member `rac1`
Attempting to start `ora.rac2.LISTENER_RAC2.lsnr` on member `rac2`
Start of `ora.rac1.LISTENER_RAC1.lsnr` on member `rac1` succeeded.
Start of `ora.rac2.LISTENER_RAC2.lsnr` on member `rac2` succeeded.
Start of `ora.rac1.ASM1.asm` on member `rac1` succeeded.
Attempting to start `ora.orcl.orcl1.inst` on member `rac1`
Start of `ora.rac2.ASM2.asm` on member `rac2` succeeded.
Attempting to start `ora.orcl.orcl2.inst` on member `rac2`
Start of `ora.orcl.orcl1.inst` on member `rac1` succeeded.
Start of `ora.orcl.orcl2.inst` on member `rac2` succeeded.
CRS-1002: Resource 'ora.rac1.ons' is already running on member 'rac1'

CRS-1002: Resource 'ora.rac2.ons' is already running on member 'rac2'

CRS-1002: Resource 'ora.orcl.db' is already running on member 'rac2'

Attempting to start `ora.rac1.gsd` on member `rac1`
Attempting to start `ora.rac2.gsd` on member `rac2`
Start of `ora.rac1.gsd` on member `rac1` succeeded.
Start of `ora.rac2.gsd` on member `rac2` succeeded.
CRS-0223: Resource 'ora.orcl.db' has placement error.

CRS-0223: Resource 'ora.rac1.ons' has placement error.

CRS-0223: Resource 'ora.rac2.ons' has placement error.

[root@rac1 ~]# 
[root@rac1 ~]# /u01/app/10g/grid/bin/crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora.orcl.db    application    0/0    0/1    ONLINE    ONLINE    rac2        
ora....l1.inst application    0/5    0/0    ONLINE    ONLINE    rac1        
ora....l2.inst application    0/5    0/0    ONLINE    ONLINE    rac2        
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1        
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    0/5    0/0    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac1        
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2        
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2        
ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2        
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2        
ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2        
[root@rac1 ~]# 
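
Once the cluster has been observed healthy on the new raw devices, the OCFS2 volume no longer carries any clusterware files and can be retired on both nodes, along with its services. A sketch; keep the old files under /ocfs until you are fully confident, and remember to remove the corresponding /etc/fstab entry:

# On each node, after verifying the cluster (sketch)
umount /ocfs
chkconfig ocfs2 off
chkconfig o2cb off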

V. Conclusion
With this, the migration of the OCR and voting disk from OCFS2 to raw devices is complete.
