The system being worked on here is: Oracle 10g R2 (10.2.0.5) RAC, 5 nodes + RHEL 5.10 + OCFS2.
The disk device to be added is 1958.5 GB in size.

The current environment looks like this:
Status of nodes 1 and 5:

[root@node1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LV07
                       39G   13G   25G  34% /
/dev/mapper/VolGroup00-LV01
                      7.8G  147M  7.3G   2% /home
/dev/mapper/VolGroup00-LV05
                       39G   27G   11G  72% /oracle
/dev/mapper/VolGroup00-LV02
                      7.8G  147M  7.3G   2% /tmp
/dev/mapper/VolGroup00-LV03
                       12G  4.0G  7.1G  36% /usr
/dev/mapper/VolGroup00-LV04
                      7.8G  3.2G  4.3G  43% /var
/dev/sda1             190M   37M  144M  21% /boot
tmpfs                 5.9G     0  5.9G   0% /dev/shm
/dev/sdb1             2.0T  2.0T  8.3G 100% /Data
/dev/sdc1             200G   22G  179G  11% /Arch
[root@node1 ~]# 
[root@node1 ~]# mount
/dev/mapper/VolGroup00-LV07 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/mapper/VolGroup00-LV01 on /home type ext3 (rw)
/dev/mapper/VolGroup00-LV05 on /oracle type ext3 (rw)
/dev/mapper/VolGroup00-LV02 on /tmp type ext3 (rw)
/dev/mapper/VolGroup00-LV03 on /usr type ext3 (rw)
/dev/mapper/VolGroup00-LV04 on /var type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
configfs on /sys/kernel/config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sdb1 on /Data type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)
/dev/sdc1 on /Arch type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)
[root@node1 ~]# 

Status of nodes 2, 3, and 4:

[root@node2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      239G   40G  187G  18% /
/dev/cciss/c0d0p1      99M   17M   78M  18% /boot
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sdc1             1.8T  196M  1.7T   1% /Backup
/dev/sda1             2.0T  2.0T  8.3G 100% /Data
/dev/sdb1             200G   22G  179G  11% /Arch
[root@node2 ~]# 
[root@node2 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/cciss/c0d0p1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sdc1 on /Backup type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
configfs on /sys/kernel/config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sda1 on /Data type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)
/dev/sdb1 on /Arch type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)
[root@node2 ~]# 

Judging from the device names already in use in the df output above, the disk to be added will show up as /dev/sdd1 on nodes 1 and 5, while on nodes 2, 3, and 4 it is /dev/sdc1.
This is worth paying attention to: normally the device names should be the same on every node, but the operation can still be carried out when they differ.
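
Before going any further, it is worth confirming that /dev/sdd1 on nodes 1/5 and /dev/sdc1 on nodes 2/3/4 really are the same physical LUN. A minimal sketch (the scsi_id invocation assumes the RHEL 5 udev syntax; run the matching device name on each node):

# On node1/node5 (candidate device /dev/sdd):
fdisk -l /dev/sdd | grep Disk       # compare the reported size
/sbin/scsi_id -g -u -s /block/sdd   # compare the SCSI WWID

# On node2/node3/node4 (candidate device /dev/sdc):
fdisk -l /dev/sdc | grep Disk
/sbin/scsi_id -g -u -s /block/sdc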

Check the OCFS2 status:

[root@node1 ~]# /etc/init.d/o2cb status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold = 61
  Network idle timeout: 30000
  Network keepalive delay: 3000
  Network reconnect delay: 3000
Checking O2CB heartbeat: Active
[root@node1 ~]# 

This status must be the same on every node.
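
A quick way to check that on all five nodes at once (a sketch only; the node1..node5 host names match cluster.conf below, and passwordless root ssh is assumed):

for h in node1 node2 node3 node4 node5; do
    echo "== $h =="
    ssh root@$h /etc/init.d/o2cb status
done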

The operation covered in this article is adding a new OCFS2 disk and mounting it; it is not meant to touch the OCFS2 volumes that hold the Oracle RAC data. (Or rather, it tries not to: as shown above, the same shared disk appears under different device names on different nodes in this OCFS2 environment, so stopping and starting the OCFS2 services cannot be ruled out, and that does affect RAC service continuity.)
To guard against accidents, it is best to take a backup of the database before starting the steps below.
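
For example, a straightforward full backup with RMAN could look like this (a sketch only, not part of the original procedure; the /Backup destination is just an illustration, use whatever backup location your environment provides):

rman target /
RMAN> backup database format '/Backup/full_%d_%T_%U.bkp';
RMAN> backup current controlfile format '/Backup/ctl_%d_%T_%U.bkp';
RMAN> exit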

Set the DISPLAY environment variable:

[root@node1 ~]# export DISPLAY=1************:0.0
[root@node1 ~]# xhost +
access control disabled, clients can connect from any host
[root@node1 ~]# 
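
Note that xhost + disables X access control completely. If you prefer not to do that, X11 forwarding over ssh achieves the same goal (a sketch, assuming X11Forwarding is enabled in sshd on the node):

# From the workstation that owns the X display:
ssh -X root@node1
# DISPLAY is then set automatically in that session, and ocfs2console can be
# started from it without running xhost +.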

Check the current state of the OCFS2 configuration file, cluster.conf:

[root@node1 ~]# cat /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 18******1
        number = 0
        name = node1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 18******2
        number = 1
        name = node2
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 18******3
        number = 2
        name = node3
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 18******4
        number = 3
        name = node4
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 18******5
        number = 4
        name = node5
        cluster = ocfs2

cluster:
        node_count = 5
        name = ocfs2

[root@node1 ~]# 

This file should also be identical on every node.
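
A quick consistency check across the five nodes (a sketch; passwordless root ssh and the node1..node5 host names are assumptions):

for h in node1 node2 node3 node4 node5; do
    echo -n "$h: "
    ssh root@$h md5sum /etc/ocfs2/cluster.conf
done
# All five checksums should be identical.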

Create the mount point that the new OCFS2 shared disk will use: /data2

[root@wsscm01 ~]# mkdir /data2
[root@wsscm01 ~]# ll -d /Data
drwxrwxr-x 17 oracle dba 3896 Jul 27  2013 /Data
[root@wsscm01 ~]# 
[root@wsscm01 ~]# ll -d /data2
drwxr-xr-x 2 root root 4096 Jun 25 21:19 /data2
[root@wsscm01 ~]# 
[root@wsscm01 ~]# chown -R oracle.dba /data2
[root@wsscm01 ~]# 
[root@wsscm01 ~]# ll -d /data2
drwxr-xr-x 2 oracle dba 4096 Jun 25 21:19 /data2
[root@wsscm01 ~]# 
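
The same mount point, with the same ownership, has to exist on every node before the volume can be mounted there. A small loop saves a few logins (a sketch; passwordless root ssh is assumed, and the host names follow the wsscm01..wsscm05 naming seen in the prompts):

for h in wsscm02 wsscm03 wsscm04 wsscm05; do
    ssh root@$h "mkdir -p /data2 && chown oracle.dba /data2 && ls -ld /data2"
done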

Launch the ocfs2console tool:

[root@node1 ~]# ocfs2console &
[1] 23810
[root@node1 ~]# 

[Screenshots: ocfs2console steps 1 to 4]

Before continuing with this step, take a look at the current contents of the new shared disk:

[root@wsscm01 ~]# strings /dev/sdd1 | head -n 13
lost+found
fmsg
contract
wscrmbak
A"fU
sgUB
k>U_
_BU_
TFU_
HJU_
1RU_
&VU_
wWU_
[root@wsscm01 ~]# strings /dev/sdc1 | head -n 13
this is an ocfs2 volume
this is an ocfs2 volume
this is an ocfs2 volume
this is an ocfs2 volume
OCFSV2
{neP
{neP
oradatafiles
GROUP01
GROUP01
INODE01
INODE01
{neP
[root@wsscm01 ~]# 

[Screenshots: ocfs2console steps 5 to 9]
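
For reference, the Format step performed in the GUI above can also be done from the command line with mkfs.ocfs2. A sketch of a roughly equivalent call (the block and cluster sizes are assumptions; only the volume label "oracle" and the need for at least five node slots are known from the outputs that follow):

# DANGER: this wipes /dev/sdd1. Run it on one node only, and only after
# confirming the device is not mounted or in use anywhere (see the note at
# the end of this article).
mkfs.ocfs2 -b 4K -C 32K -N 5 -L oracle /dev/sdd1
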
At this point, check the filesystem status:
Node 1:

[root@node1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LV07
                       39G   13G   25G  34% /
/dev/mapper/VolGroup00-LV01
                      7.8G  147M  7.3G   2% /home
/dev/mapper/VolGroup00-LV05
                       39G   27G   11G  72% /oracle
/dev/mapper/VolGroup00-LV02
                      7.8G  147M  7.3G   2% /tmp
/dev/mapper/VolGroup00-LV03
                       12G  4.0G  7.1G  36% /usr
/dev/mapper/VolGroup00-LV04
                      7.8G  3.2G  4.3G  43% /var
/dev/sda1             190M   37M  144M  21% /boot
tmpfs                 5.9G     0  5.9G   0% /dev/shm
/dev/sdb1             2.0T  2.0T  8.3G 100% /Data
/dev/sdc1             200G   22G  179G  11% /Arch
/dev/sdd1             1.8T  2.9G  1.8T   1% /data2
[root@node1 ~]# 

Status of nodes 2, 3, 4, and 5:

[root@node2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      239G   40G  187G  18% /
/dev/cciss/c0d0p1      99M   17M   78M  18% /boot
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sdc1             1.8T  196M  1.7T   1% /Backup
/dev/sda1             2.0T  2.0T  8.3G 100% /Data
/dev/sdb1             200G   22G  179G  11% /Arch
[root@node2 ~]# 

From the output above you can see that so far only node 1 has applied the configuration change made through ocfs2console; the other four nodes have not picked it up.
The output of the commands below shows the difference between nodes 1/5 and nodes 2/3/4 even more clearly:

[root@wsscm01 data2]# mounted.ocfs2 -d
Device                FS     Stack  UUID                              Label
/dev/sdb1             ocfs2  o2cb   7794FA513A2547C7AA69C6D0055E58A7  oradatafiles
/dev/sdc1             ocfs2  o2cb   B68172211A124E82B3DEF12280907B38  oradatafiles
/dev/sdd1             ocfs2  o2cb   B2B93781687E4E0DB6CF58DCF4F14BFA  oracle
[root@wsscm01 data2]# mounted.ocfs2 -f
Device                FS     Nodes
/dev/sdb1             ocfs2  wsscm03, wsscm02, wsscm05, wsscm04, wsscm01
/dev/sdc1             ocfs2  wsscm03, wsscm02, wsscm05, wsscm04, wsscm01
/dev/sdd1             ocfs2  wsscm01, wsscm05
[root@wsscm01 data2]# 

On node 5:

[root@wsscm05 /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LV06
                       16G  3.6G   12G  24% /
/dev/mapper/VolGroup00-LV02
                      3.9G  137M  3.6G   4% /tmp
/dev/mapper/VolGroup00-LV04
                       20G  8.7G  9.8G  48% /oracle
/dev/mapper/VolGroup00-LV01
                      7.8G  147M  7.3G   2% /home
/dev/mapper/VolGroup00-LV03
                       12G  4.0G  7.1G  36% /usr
/dev/sda1             190M   17M  164M  10% /boot
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/sdb1             2.0T  2.0T  8.3G 100% /Data
/dev/sdc1             200G   22G  179G  11% /Arch
[root@wsscm05 /]# 
[root@wsscm05 /]# mount -t ocfs2 /dev/sdd1 /data2
[root@wsscm05 /]# 
[root@wsscm05 /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LV06
                       16G  3.6G   12G  24% /
/dev/mapper/VolGroup00-LV02
                      3.9G  137M  3.6G   4% /tmp
/dev/mapper/VolGroup00-LV04
                       20G  8.7G  9.8G  48% /oracle
/dev/mapper/VolGroup00-LV01
                      7.8G  147M  7.3G   2% /home
/dev/mapper/VolGroup00-LV03
                       12G  4.0G  7.1G  36% /usr
/dev/sda1             190M   17M  164M  10% /boot
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/sdb1             2.0T  2.0T  8.3G 100% /Data
/dev/sdc1             200G   22G  179G  11% /Arch
/dev/sdd1             1.8T  2.9G  1.8T   1% /data2
[root@wsscm05 /]# 
[root@wsscm05 /]# mount
/dev/mapper/VolGroup00-LV06 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/mapper/VolGroup00-LV02 on /tmp type ext3 (rw)
/dev/mapper/VolGroup00-LV04 on /oracle type ext3 (rw)
/dev/mapper/VolGroup00-LV01 on /home type ext3 (rw)
/dev/mapper/VolGroup00-LV03 on /usr type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
configfs on /sys/kernel/config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sdb1 on /Data type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)
/dev/sdc1 on /Arch type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)
/dev/sdd1 on /data2 type ocfs2 (rw,_netdev,heartbeat=local)
[root@wsscm05 /]# 
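
A mount issued by hand like this does not survive a reboot. To make it persistent, the usual approach is an /etc/fstab entry with _netdev plus the o2cb/ocfs2 services enabled at boot (a sketch only; the original post does not show this step, and the device name differs per node as noted above):

# /etc/fstab (use /dev/sdd1 on nodes 1/5, /dev/sdc1 on nodes 2/3/4):
/dev/sdd1    /data2    ocfs2    _netdev,defaults    0 0

# make sure the cluster services come up at boot:
chkconfig o2cb on
chkconfig ocfs2 on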

On nodes 2, 3, and 4:
1. Stop the RAC:

[root@wsscm02 ~]# su - oracle
oracle@wsscm02: /home/oracle> crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora.wsscm.db   application    0/0    0/1    ONLINE    ONLINE    wsscm04     
ora....m1.inst application    0/5    0/0    ONLINE    ONLINE    wsscm04     
ora....m2.inst application    0/5    0/0    ONLINE    ONLINE    wsscm02     
ora....m3.inst application    0/5    0/0    ONLINE    ONLINE    wsscm01     
ora....m4.inst application    0/5    0/0    ONLINE    ONLINE    wsscm03     
ora....m5.inst application    2/5    0/0    ONLINE    ONLINE    wsscm05     
ora....01.lsnr application    0/5    0/0    ONLINE    ONLINE    wsscm01     
ora....m01.gsd application    0/5    0/0    ONLINE    ONLINE    wsscm01     
ora....m01.ons application    0/3    0/0    ONLINE    ONLINE    wsscm01     
ora....m01.vip application    0/0    0/0    ONLINE    ONLINE    wsscm01     
ora....02.lsnr application    0/5    0/0    ONLINE    ONLINE    wsscm02     
ora....m02.gsd application    0/5    0/0    ONLINE    ONLINE    wsscm02     
ora....m02.ons application    0/3    0/0    ONLINE    ONLINE    wsscm02     
ora....m02.vip application    0/0    0/0    ONLINE    ONLINE    wsscm02     
ora....03.lsnr application    0/5    0/0    ONLINE    ONLINE    wsscm03     
ora....m03.gsd application    0/5    0/0    ONLINE    ONLINE    wsscm03     
ora....m03.ons application    0/3    0/0    ONLINE    ONLINE    wsscm03     
ora....m03.vip application    0/0    0/0    ONLINE    ONLINE    wsscm03     
ora....04.lsnr application    0/5    0/0    ONLINE    ONLINE    wsscm04     
ora....m04.gsd application    0/5    0/0    ONLINE    ONLINE    wsscm04     
ora....m04.ons application    0/3    0/0    ONLINE    ONLINE    wsscm04     
ora....m04.vip application    0/0    0/0    ONLINE    ONLINE    wsscm04     
ora....05.lsnr application    0/5    0/0    ONLINE    ONLINE    wsscm05     
ora....m05.gsd application    0/5    0/0    ONLINE    ONLINE    wsscm05     
ora....m05.ons application    0/3    0/0    ONLINE    ONLINE    wsscm05     
ora....m05.vip application    0/0    0/0    ONLINE    ONLINE    wsscm05     
oracle@wsscm02: /home/oracle> 
oracle@wsscm02: /home/oracle> sqlplus / as sysdba

SQL*Plus: Release 10.2.0.5.0 - Production on Thu Jun 25 22:06:59 2015

Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> select instance_name,status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
wsscm2           OPEN

SQL> show parameter db_name;    

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      wsscm
SQL> 
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> 
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
oracle@wsscm02: /home/oracle> 
oracle@wsscm02: /home/oracle> env | grep HOME
ORA_CRS_HOME=/oracle/crs
HOME=/home/oracle
ORACLE_HOME=/oracle/product/10.2.0
oracle@wsscm02: /home/oracle> exit
logout

[root@wsscm02 ~]# 
[root@wsscm02 ~]# ps -ef | grep ora_
root     14460 29400  0 22:10 pts/1    00:00:00 grep ora_
[root@wsscm02 ~]# ps -ef | grep d.bin
oracle    8475  8470  0 Jun23 ?        00:00:03 /oracle/crs/bin/evmd.bin
root      8561  7741  0 Jun23 ?        00:01:49 /oracle/crs/bin/crsd.bin reboot
root      9067  8570  0 Jun23 ?        00:00:00 /oracle/crs/bin/oprocd.bin run -t 1000 -m 500 -f
oracle    9120  8646  0 Jun23 ?        00:04:25 /oracle/crs/bin/ocssd.bin
root     14482 29400  0 22:10 pts/1    00:00:00 grep d.bin
[root@wsscm02 ~]# 
[root@wsscm02 ~]# /oracle/crs/bin/crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@wsscm02 ~]# 
[root@wsscm02 ~]# ps -ef | grep d.bin
root     15032 29400  0 22:10 pts/1    00:00:00 grep d.bin
[root@wsscm02 ~]# 
[root@wsscm02 ~]# 

2. Stop the OCFS2 services:

[root@wsscm02 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      239G   40G  187G  18% /
/dev/cciss/c0d0p1      99M   17M   78M  18% /boot
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sda1             2.0T  2.0T  8.3G 100% /Data
/dev/sdb1             200G   22G  179G  11% /Arch
[root@wsscm02 ~]# 
[root@wsscm02 ~]# umount /dev/sda1
[root@wsscm02 ~]# 
[root@wsscm02 ~]# umount /dev/sdb1
[root@wsscm02 ~]#
[root@wsscm02 ~]# /etc/init.d/o2cb unload
Stopping O2CB cluster ocfs2: OK
Unloading module "ocfs2": OK
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK
[root@wsscm02 ~]# 
[root@wsscm02 ~]# /etc/init.d/o2cb stop
[root@wsscm02 ~]#
[root@wsscm02 ~]# /etc/init.d/o2cb status
Driver for "configfs": Not loaded
Driver for "ocfs2_dlmfs": Not loaded
Checking O2CB cluster ocfs2: Offline
[root@wsscm02 ~]#

Restart OCFS2:

[root@wsscm02 ~]# /etc/init.d/o2cb status
Driver for "configfs": Not loaded
Driver for "ocfs2_dlmfs": Not loaded
Checking O2CB cluster ocfs2: Offline
[root@wsscm02 ~]# 
[root@wsscm02 ~]# /etc/init.d/o2cb start
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
[root@wsscm02 ~]# 

Mount the new shared disk:

[root@wsscm02 ~]# mounted.ocfs2 -d
Device                FS     Stack  UUID                              Label
/dev/sda1             ocfs2  o2cb   7794FA513A2547C7AA69C6D0055E58A7  oradatafiles
/dev/sdb1             ocfs2  o2cb   B68172211A124E82B3DEF12280907B38  oradatafiles
/dev/sdc1             ocfs2  o2cb   B2B93781687E4E0DB6CF58DCF4F14BFA  oracle
[root@wsscm02 ~]# mounted.ocfs2 -f
Device                FS     Nodes
/dev/sda1             ocfs2  wsscm03, wsscm05, wsscm04, wsscm01
/dev/sdb1             ocfs2  wsscm03, wsscm05, wsscm04, wsscm01
/dev/sdc1             ocfs2  wsscm01, wsscm05
[root@wsscm02 ~]# 
[root@wsscm02 ~]# mount -t ocfs2 -o datavolume,nointr /dev/sdc1 /data2
[root@wsscm02 ~]# 
[root@wsscm02 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      239G   40G  187G  18% /
/dev/cciss/c0d0p1      99M   17M   78M  18% /boot
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sdc1             1.8T  2.9G  1.8T   1% /data2
[root@wsscm02 ~]# 
[root@wsscm02 ~]# mounted.ocfs2 -f
Device                FS     Nodes
/dev/sda1             ocfs2  wsscm03, wsscm05, wsscm04, wsscm01
/dev/sdb1             ocfs2  wsscm03, wsscm05, wsscm04, wsscm01
/dev/sdc1             ocfs2  wsscm01, wsscm05, wsscm02
[root@wsscm02 ~]# 

The above was done on node 2; repeat the same steps on nodes 3 and 4.

In the end, the new volume is mounted on all nodes:

[root@wsscm04 ~]# mounted.ocfs2 -f
Device                FS     Nodes
/dev/sda1             ocfs2  wsscm05, wsscm01
/dev/sdb1             ocfs2  wsscm05, wsscm01
/dev/sdc1             ocfs2  wsscm01, wsscm05, wsscm02, wsscm03, wsscm04
[root@wsscm04 ~]# 

Finally, remount the OCFS2 volumes that were unmounted earlier, then bring the Oracle 10g RAC back up.
Start the 10g RAC: /etc/init.d/init.crs start
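
On nodes 2, 3, and 4 that amounts to roughly the following (a sketch reconstructing the reverse of the unmount steps above, with the same mount options the volumes were using and the device names as seen on those nodes):

mount -t ocfs2 -o datavolume,nointr /dev/sda1 /Data
mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /Arch
/etc/init.d/init.crs start
# then, as oracle, verify the cluster resources with: crs_stat -t -v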

Note:
When performing the Format step in ocfs2console near the beginning, make absolutely sure that the disk you are about to format really is not mounted or in use by anyone; once the Format is confirmed, all data on it is lost and effectively unrecoverable (or recoverable only with a great deal of effort).
If a handover is sloppy, this is exactly the kind of place where things go badly wrong!
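
A few checks that help before clicking through the Format confirmation (a sketch; run them on every node, with the device name as seen on that node):

mount | grep sdd1             # is it mounted locally?
fuser -m /dev/sdd1            # any process still using it? (no output = none)
mounted.ocfs2 -f /dev/sdd1    # if it is already OCFS2: which nodes mount it?
strings /dev/sdd1 | head      # does it still look like it carries data?
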
Done.
