Oracle RAC 10g (OCFS2): Adding a Node

As the title suggests, this article shows how to add a new cluster node to an Oracle RAC 10g system built on OCFS2.
Prerequisites:
1. A fully deployed, OCFS2-based Oracle RAC 10g system (a two-node RAC in this environment).
2. A freshly installed Linux host to serve as the new node.
Environment:
The nodes of the existing RAC environment are configured as follows:
Node servers:
OS: Oracle Enterprise Linux 5.3
Firewall: stopped, disabled at boot
SELinux: disabled
OS user for Oracle: oracle
Clusterware installation paths:
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/10g/grid
Database software installation path:
ORACLE_BASE=/u01/app/oracle
Node one:
Hostname: rac1
Node two:
Hostname: rac2
Node three (the new node):
Hostname: rac3
IP address assignment:
[oracle@rac3 ~]$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
# Localhost
127.0.0.1      localhost
# Public
192.168.56.11  rac1
192.168.56.12  rac2
192.168.56.13  rac3
# Private
192.168.10.11  rac1-priv
192.168.10.12  rac2-priv
192.168.10.13  rac3-priv
# VIP
192.168.56.21  rac1-vip
192.168.56.22  rac2-vip
192.168.56.23  rac3-vip
[oracle@rac3 ~]$
Notes:
1. The new node's host configuration must match the existing RAC nodes, i.e. the same pre-installation setup used before the Oracle software was installed.
2. SSH user equivalence must be configured between the new node and every existing RAC node.
3. The new node's clock must stay synchronized with the existing RAC nodes.
4. The existing RAC cluster must remain healthy and online throughout the node addition.
5. The software installation directories on the new node must match the existing RAC nodes, i.e. the same Clusterware and Database paths. (A quick pre-check sketch follows these notes.)
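Before starting, it can save time to verify the new host with cluvfy, the Cluster Verification Utility shipped on the 10gR2 Clusterware media. A minimal pre-check sketch, assuming cluvfy is staged on an existing node (the stage and component names here are from the 10.2 CVU; verify them against your release):

# run as oracle from an existing node, e.g. rac1
# hardware/OS checks across all three nodes
cluvfy stage -post hwos -n rac1,rac2,rac3 -verbose
# node connectivity over the public and private interfaces
cluvfy comp nodecon -n rac1,rac2,rac3 -verbose
# readiness of the new node for a CRS installation
cluvfy stage -pre crsinst -n rac3 -verbose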
Part 1: Preparing the new node's operating system (pre-configuration of node three)
Network-related configuration:
Hostname, NIC state/configuration, firewall state, SELinux state/configuration
[root@rac3 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:F5:0F:B5
          inet addr:192.168.56.13  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fef5:fb5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2720 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4415 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:704644 (688.1 KiB)  TX bytes:415592 (405.8 KiB)
          Base address:0x2000 Memory:fd5c0000-fd5e0000

eth1      Link encap:Ethernet  HWaddr 00:0C:29:F5:0F:BF
          inet addr:192.168.10.13  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fef5:fbf/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:75 errors:0 dropped:0 overruns:0 frame:0
          TX packets:43 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:7916 (7.7 KiB)  TX bytes:7730 (7.5 KiB)
          Base address:0x2040 Memory:fd5a0000-fd5c0000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:5973 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5973 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:8188450 (7.8 MiB)  TX bytes:8188450 (7.8 MiB)

[root@rac3 ~]#
[root@rac3 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=rac3
GATEWAY=192.168.56.1
[root@rac3 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.56.255
HWADDR=00:0C:29:F5:0F:B5
IPADDR=192.168.56.13
NETMASK=255.255.255.0
NETWORK=192.168.56.0
ONBOOT=yes
[root@rac3 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.10.255
HWADDR=00:0C:29:F5:0F:BF
IPADDR=192.168.10.13
NETMASK=255.255.255.0
NETWORK=192.168.10.0
ONBOOT=yes
[root@rac3 ~]#
[root@rac3 ~]# service iptables status
Firewall is stopped.
[root@rac3 ~]# sestatus
SELinux status:                 disabled
[root@rac3 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
SELINUXTYPE=targeted
[root@rac3 ~]#
Linux kernel parameter changes:
Edits to sysctl.conf, limits.conf, and /etc/pam.d/login. (Note that the stock Red Hat kernel.shmmax/kernel.shmall lines are overridden by the Oracle values appended later in the file; for sysctl, the last setting wins.)
[root@rac3 ~]# cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536

# Controls the default maxmimum size of a mesage queue
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
[root@rac3 ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
[root@rac3 ~]#
[root@rac3 ~]# cat /etc/security/limits.conf
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain>        <type>  <item>  <value>
#
#Where:
#<domain> can be:
#        - an user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open files
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to
#        - rtprio - max realtime priority
#
#<domain>      <type>  <item>         <value>
#
#*               soft    core            0
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#@student        -       maxlogins       4

# End of file
*   soft   nproc    2047
*   hard   nproc   16384
*   soft   nofile   1024
*   hard   nofile  65536
[root@rac3 ~]#
[root@rac3 ~]# cat /etc/pam.d/login
#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    include      system-auth
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    optional     pam_keyinit.so force revoke
session    required     /lib/security/pam_limits.so
session    required     pam_limits.so
[root@rac3 ~]#
[root@rac3 ~]#
Loading Linux kernel modules:
Load the hangcheck-timer module. (In modprobe.conf, hangcheck_tick sets how often the timer checks the system, and hangcheck_margin sets how long a hang is tolerated before the node self-resets.)
[root@rac3 ~]# cat /etc/modprobe.conf
alias eth0 e1000
alias eth1 e1000
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptspi
alias scsi_hostadapter2 ata_piix
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
[root@rac3 ~]#
[root@rac3 ~]# cat /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local
modprobe hangcheck-timer
[root@rac3 ~]#
[root@rac3 ~]# sh /etc/rc.local
[root@rac3 ~]#
[root@rac3 ~]# lsmod | grep --color hangcheck
hangcheck_timer        37465  0
[root@rac3 ~]#
Mounting the installation media:
fstab, mount -a
[root@rac3 ~]# cat /etc/fstab
/dev/VolGroup00/LogVol00  /          ext3     defaults        1 1
LABEL=/boot               /boot      ext3     defaults        1 2
tmpfs                     /dev/shm   tmpfs    defaults        0 0
devpts                    /dev/pts   devpts   gid=5,mode=620  0 0
sysfs                     /sys       sysfs    defaults        0 0
proc                      /proc      proc     defaults        0 0
/dev/VolGroup00/LogVol01  swap       swap     defaults        0 0
/dev/hdc                  /iso       iso9660  defaults        0 0
[root@rac3 ~]# mount -a
[root@rac3 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       18G  2.8G   14G  17% /
/dev/sda1              99M   13M   82M  13% /boot
tmpfs                 501M     0  501M   0% /dev/shm
/dev/hdc              3.3G  3.3G     0 100% /iso
[root@rac3 ~]#
YUM status:
*.repo, yum repolist
[root@rac3 ~]# cat /etc/yum.repos.d/iso.repo
[Server]
name=Server
baseurl=file:///iso/Server
gpgcheck=0
[VT]
name=VT
baseurl=file:///iso/VT
gpgcheck=0
[Cluster]
name=Cluster
baseurl=file:///iso/Cluster
gpgcheck=0
[ClusterStorage]
name=ClusterStorage
baseurl=file:///iso/ClusterStorage
gpgcheck=0
[root@rac3 ~]#
[root@rac3 ~]# yum repolist
Loaded plugins: security
repo id          repo name        status
Cluster          Cluster          enabled :     9
ClusterStorage   ClusterStorage   enabled :    13
Server           Server           enabled : 2,969
VT               VT               enabled :    17
repolist: 3,008
[root@rac3 ~]#
Package installation:
For Oracle:
yum install -y binutils* compat-db setarch compat-libstdc++-* control-center \
  elfutils-libelf-* elfutils-libelf-devel-* elfutils-libelf-devel-static-* \
  gcc-* gcc-c++-* glibc-* glibc-common-* glibc-devel-* glibc-headers-* ksh \
  libaio-* libaio-devel-* libgcc libstdc++-* libstdc++-devel make sysstat \
  unixODBC unixODBC-devel compat-libstdc++-33.i386 compat-libstdc++-296.i386 \
  glibc.i686 glibc-devel.i386 unixODBC.i386 unixODBC-devel.i386 libaio.i386 \
  libaio-devel.i386 libgcc.i386 libstdc++.i386 libstdc++-devel.i386 libXp.i386
For Oracle ASMLib (the wildcard pulls both oracleasm-support and the kernel-matched driver package shown later under rpm -qa):
yum install -y oracleasm*
For OCFS2:
yum install -y ocfs2console.x86_64 ocfs2-tools.x86_64 ocfs2-tools-devel.x86_64 ocfs2-2.6.18-128.el5.x86_64
For iSCSI:
yum install -y iscsi*
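The initiator service must be running (and enabled at boot) before any targets can be discovered. A small sketch for RHEL/OEL 5, using the service names shipped with iscsi-initiator-utils:

# start the iSCSI initiator now, and enable it on every boot
service iscsi start
chkconfig iscsi on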
Creating the groups and user (the GIDs, the UID, and the extra asmdba and oper groups below are taken from the id output that follows):
groupadd -g 1000 oinstall
groupadd -g 2000 dba
groupadd -g 4000 asmdba
groupadd -g 6000 oper
useradd -u 1200 -g oinstall -G dba,asmdba,oper oracle
User status:
[root@rac3 ~]# id oracle
uid=1200(oracle) gid=1000(oinstall) groups=1000(oinstall),2000(dba),4000(asmdba),6000(oper)
[root@rac3 ~]#
The oracle user's environment variables (note that on node three the ASM SID must be +ASM3, not a copy of node one's +ASM1; the profile below reflects that):
[root@rac3 ~]# cat /home/oracle/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

#===================================================
# Oracle Database 10g RAC Environment Variable
#===================================================

# Clusterware
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/10g/grid

# ASM_SID (node 3 runs the +ASM3 instance)
export ORACLE_SID=+ASM3

# Database
#export ORACLE_BASE=/u01/app/oracle
#export ORACLE_HOME=$ORACLE_BASE/product/10g/dbhome_1

# DATABASE_SID
#export ORACLE_SID=dblogic3
#export ORACLE_UNQNAME=dblogic

# Software and Library Path
export JAVA_HOME=/usr/local/java
export ORACLE_PATH=/u01/app/common/oracle/sql
export ORACLE_TERM=xterm
export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_NLS10=$ORACLE_HOME/nls/data
export PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin:/u01/app/common/oracle/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/oracm/lib:/lib:/usr/lib:/usr/local/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export THREADS_FLAG=native
export TEMP=/tmp
export TMPDIR=/tmp
[root@rac3 ~]#
Creating the Oracle RAC software directory structure (the dbhome_1 path is included so that the tree below matches):
mkdir -p /u01/app/{grid,10g,oracle}
mkdir -p /u01/app/10g/grid
mkdir -p /u01/app/oracle/product/10g/dbhome_1
Assigning ownership:
chown -R oracle:oinstall /u01
The directory tree after creation:
[root@rac3 ~]# tree /u01/
/u01/
`-- app
    |-- 10g
    |   `-- grid
    |-- grid
    `-- oracle
        `-- product
            `-- 10g
                `-- dbhome_1

8 directories, 0 files
[root@rac3 ~]#
Creating the Oracle OCFS2 mount point (named /ocfs2 to match its use later in this article):
mkdir /ocfs2
Time synchronization:
Since this environment runs on VMware, time synchronization only requires installing VMware Tools.
Once it is installed, enable time sync:
[root@rac3 ~]# vmware-toolbox-cmd timesync enable
Enabled
[root@rac3 ~]# vmware-toolbox-cmd timesync status
Enabled
[root@rac3 ~]#
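On physical hardware, where VMware Tools is not an option, ntpd against a common time source is the usual substitute. A minimal sketch, assuming all nodes can reach the same NTP server (the server name below is a placeholder):

# /etc/ntp.conf (excerpt) - point every node at the same source
server ntp.example.com
# then start the daemon and enable it at boot
service ntpd start
chkconfig ntpd on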
SSH equivalence:
On RHEL/OEL, configuring user equivalence comes down to generating a key and distributing it. Run the following as oracle on every node; ssh-copy-id takes one host per invocation, hence the loop, which also covers the new node:
ssh-keygen -t rsa
for host in rac1 rac1-priv rac2 rac2-priv rac3 rac3-priv; do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; done
For the detailed steps, see the SSH section of this RAC installation guide: http://d-prototype.com/archives/881
The expected result:
Node one:
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ ssh rac3 date
Thu Sep 18 21:20:38 CST 2014
[oracle@rac1 ~]$
Node two:
[root@rac2 ~]# su - oracle
[oracle@rac2 ~]$ ssh rac3 date
Thu Sep 18 21:20:38 CST 2014
[oracle@rac2 ~]$
Node three:
[root@rac3 ~]# su - oracle
[oracle@rac3 ~]$ ssh rac1 date
Thu Sep 18 22:10:14 CST 2014
[oracle@rac3 ~]$ ssh rac2 date
Thu Sep 18 22:10:17 CST 2014
[oracle@rac3 ~]$
Attaching the shared storage:
Key commands:
1. iscsiadm -m discovery -t sendtargets -p 192.168.56.130
2. iscsiadm -m node -T … (log in to each discovered target; see the sketch below)
[root@rac3 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.56.130
192.168.56.130:3260,1 ocfs
192.168.56.130:3260,1 ora10gdata01
[root@rac3 ~]#
[root@rac3 ~]# fdisk -l | grep --color "Disk "
Disk /dev/sda: 21.4 GB, 21474836480 bytes
Disk /dev/sdc: 10.2 GB, 10234101760 bytes
Disk /dev/sdb: 1006 MB, 1006632960 bytes
[root@rac3 ~]# fdisk -l /dev/sdc

Disk /dev/sdc: 10.2 GB, 10234101760 bytes
64 heads, 32 sectors/track, 9760 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        9760     9994224   83  Linux
[root@rac3 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 1006 MB, 1006632960 bytes
31 heads, 62 sectors/track, 1022 cylinders
Units = cylinders of 1922 * 512 = 984064 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1022      982111   83  Linux
[root@rac3 ~]#
/dev/sdb: the OCFS2 volume
/dev/sdc: the shared data disk (for ASM)
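The login step that makes these devices appear is not shown above; a sketch using the target names reported by the discovery (--login is standard iscsiadm syntax):

# log in to each discovered target, then re-check with fdisk -l
iscsiadm -m node -T ocfs -p 192.168.56.130 --login
iscsiadm -m node -T ora10gdata01 -p 192.168.56.130 --login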
Oracle ASMLib:
[root@rac3 ~]# rpm -qa | grep oracleasm
oracleasm-2.6.18-128.el5-2.0.5-1.el5
oracleasm-support-2.1.2-1.el5
[root@rac3 ~]#
[root@rac3 ~]# /etc/init.d/oracleasm
Usage: /etc/init.d/oracleasm {start|stop|restart|enable|disable|configure|createdisk|deletedisk|querydisk|listdisks|scandisks|status}
[root@rac3 ~]#
[root@rac3 ~]# /etc/init.d/oracleasm status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no
[root@rac3 ~]#
[root@rac3 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Dropping Oracle ASMLib disks:                              [  OK  ]
Shutting down the Oracle ASMLib driver:                    [  OK  ]
[root@rac3 ~]#
[root@rac3 ~]# /etc/init.d/oracleasm status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no
[root@rac3 ~]#
[root@rac3 ~]# /etc/init.d/oracleasm enable
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@rac3 ~]#
[root@rac3 ~]# /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[root@rac3 ~]#
[root@rac3 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@rac3 ~]# /etc/init.d/oracleasm listdisks
DATA01
[root@rac3 ~]#
[root@rac3 ~]# ll /dev/oracleasm/disks/
total 0
brw-rw---- 1 oracle oinstall 8, 33 Sep 18 22:16 DATA01
[root@rac3 ~]#
With that, the first part of node three's operating-system pre-configuration is complete.

Part 2: Adding node three to the existing OCFS2 cluster
(For OCFS2 installation and basic configuration, see: http://d-prototype.com/archives/1000)
Prerequisites for this stage:
1. Time is synchronized across the nodes.
2. The shared storage is accessible from the new node.
Registering the new node in the existing OCFS2 cluster can be done in either of two ways.
1) GUI: ocfs2console
Work on any one of the existing OCFS2 nodes:
Launch ocfs2console:
[root@rac1 ~]# export DISPLAY=192.168.56.1:0.0
[root@rac1 ~]# xhost +
access control disabled, clients can connect from any host
[root@rac1 ~]#
[root@rac1 ~]# ocfs2console &
[1] 27318
[root@rac1 ~]#
When these steps complete successfully (in ocfs2console, add the node via Cluster > Configure Nodes..., then push the change out via Cluster > Propagate Configuration...), the OCFS2 configuration file is updated on every node:
[root@rac1 ~]# cat /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 192.168.56.11
        number = 0
        name = rac1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.56.12
        number = 1
        name = rac2
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.56.13
        number = 2
        name = rac3
        cluster = ocfs2

cluster:
        node_count = 3
        name = ocfs2

[root@rac1 ~]#
2) Command line: o2cb_ctl
(The transcript below is from a separate two-node demo cluster, oel1/oel2, to which a third node, oel3, is added; the same oel cluster also provides the transcripts from Part 3 onward. The procedure is identical for rac1/rac2/rac3.)
[root@oel1 ~]# cat /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 192.168.119.131
        number = 0
        name = oel1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.119.132
        number = 1
        name = oel2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2

[root@oel1 ~]#
[root@oel1 ~]# o2cb_ctl -C -i -n oel3 -t node -a number=2 -a ip_address=192.168.119.133 -a ip_port=7777 -a cluster=ocfs2
Node oel3 created
[root@oel1 ~]# cat /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 192.168.119.131
        number = 0
        name = oel1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.119.132
        number = 1
        name = oel2
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.119.133
        number = 2
        name = oel3
        cluster = ocfs2

cluster:
        node_count = 3
        name = ocfs2

[root@oel1 ~]# scp /etc/ocfs2/cluster.conf oel2:/etc/ocfs2/
root@oel2's password:
cluster.conf                                100%  325     0.3KB/s   00:00
[root@oel1 ~]#
[root@oel1 ~]# scp /etc/ocfs2/cluster.conf oel3:/etc/ocfs2/
root@oel3's password:
cluster.conf                                100%  325     0.3KB/s   00:00
[root@oel1 ~]#
—— —— —— —— —— ——
Configuring O2CB.
The O2CB status on a node of the existing cluster:
(node one shown here)
[root@rac1 ~]# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
  Heartbeat dead threshold: 61
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 3000
Checking O2CB heartbeat: Active
[root@rac1 ~]#
Configure O2CB on node three to match the values shown on the existing nodes:
(node three)
[root@rac3 ~]# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Offline
[root@rac3 ~]#
[root@rac3 ~]# /etc/init.d/o2cb unload
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK
[root@rac3 ~]#
[root@rac3 ~]# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [y]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [61]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [3000]:
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
[root@rac3 ~]#
[root@rac3 ~]# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
  Heartbeat dead threshold: 61
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 3000
Checking O2CB heartbeat: Not active
[root@rac3 ~]#
Mount OCFS2 on node three:
[root@oel3 ~]# mount.ocfs2 -o datavolume,nointr /dev/sdb1 /ocfs2/
[root@oel3 ~]#
(A problem that may appear at mount time is "mount.ocfs2: Transport endpoint is not connected while mounting …"; for details on this error, see: http://d-prototype.com/archives/1049)
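To make the mount survive a reboot, an /etc/fstab entry along the following lines is typical. This is a sketch using the same device and options as the manual mount above; _netdev delays mounting until the network, and hence iSCSI, is up:

/dev/sdb1   /ocfs2   ocfs2   _netdev,datavolume,nointr   0 0

The o2cb and ocfs2 init scripts should also be enabled at boot:
chkconfig o2cb on
chkconfig ocfs2 on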
Part 3: Clusterware: adding node three
Current state:
Existing RAC: node one, node two
New node: node three
(As noted above, hosts oel1, oel2, and oel3 in the transcripts correspond to nodes one through three.)
Current RAC status:
[root@oel1 ~]# /u01/app/10g/grid/bin/crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.oel1.gsd   application    0/5    0/0    ONLINE    ONLINE    oel1
ora.oel1.ons   application    0/3    0/0    ONLINE    ONLINE    oel1
ora.oel1.vip   application    0/0    0/0    ONLINE    ONLINE    oel1
ora.oel2.gsd   application    0/5    0/0    ONLINE    ONLINE    oel2
ora.oel2.ons   application    0/3    0/0    ONLINE    ONLINE    oel2
ora.oel2.vip   application    0/0    0/0    ONLINE    ONLINE    oel2
[root@oel1 ~]#
[root@oel1 ~]#
[root@oel1 ~]# /u01/app/10g/grid/bin/crsctl query crs softwareversion
CRS software version on node [oel1] is [10.2.0.5.0]
[root@oel1 ~]#
[root@oel1 ~]# tree /ocfs2/
/ocfs2/
|-- me
|-- ocr
|-- ocr-mirror1
|-- votedisk
|-- votedisk-mirror1
`-- votedisk-mirror2

0 directories, 6 files
[root@oel1 ~]#
Invoke addNode.sh to add the node.
[root@oel1 ~]# su - oracle
[oracle@oel1 ~]$ cd $ORA_CRS_HOME
[oracle@oel1 grid]$ pwd
/u01/app/10g/grid
[oracle@oel1 grid]$ cd oui/bin
[oracle@oel1 bin]$ ll
total 268
-rwxrwxr-x 1 oracle oinstall    167 Sep 18 17:28 addLangs.sh
-rwxrwxr-x 1 oracle oinstall     80 Sep 18 17:28 addNode.sh
-rwxrwxr-x 1 oracle oinstall    275 Sep 18 17:28 attachHome.sh
-rwxrwxr-x 1 oracle oinstall    182 Sep 18 17:28 detachHome.sh
-rwxrwxr-x 1 oracle oinstall  51104 Apr 20  2010 lsnodes
-rwxr-xr-x 1 oracle oinstall    335 Jul 18  2007 ouica.bat
-rwxr-xr-x 1 oracle oinstall    397 Sep 18 17:28 ouica.sh
drwxrwx--- 2 oracle oinstall   4096 Sep 18 16:45 resource
-rwxrwxr-x 1 oracle oinstall   1029 Sep 18 17:28 runConfig.sh
-rwxrwxr-x 1 oracle oinstall 175027 Apr  9  2010 runInstaller
-rwxrwxr-x 1 oracle oinstall     57 Sep 18 17:28 runInstaller.sh
[oracle@oel1 bin]$
[oracle@oel1 bin]$ export DISPLAY=192.168.119.1:0.0
[oracle@oel1 bin]$ xhost +
access control disabled, clients can connect from any host
[oracle@oel1 bin]$
[oracle@oel1 bin]$ ./addNode.sh &
[1] 13092
[oracle@oel1 bin]$ Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
Oracle Universal Installer, Version 10.2.0.5.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

[oracle@oel1 bin]$
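The OUI session launched here is the usual GUI walkthrough: on the node-selection screen, supply the new node's public, private, and VIP names, then let OUI copy the home across. For reference, the same addition can be driven non-interactively; a sketch assuming the silent-mode variables documented for the 10gR2 OUI:

./addNode.sh -silent "CLUSTER_NEW_NODES={oel3}" \
  "CLUSTER_NEW_PRIVATE_NODE_NAMES={oel3-priv}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={oel3-vip}"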
When OUI prompts for the root scripts, run them in order. Node three:
[root@oel3 ~]# /u01/app/grid/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/grid/oraInventory to 770.
Changing groupname of /u01/app/grid/oraInventory to oinstall.
The execution of the script is complete
[root@oel3 ~]#
Node one:
[oracle@oel1 ~]$ su - root
Password:
[root@oel1 ~]#
[root@oel1 ~]# /u01/app/10g/grid/install/rootaddnode.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 3: oel3 oel3-priv oel3
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/u01/app/10g/grid/bin/srvctl add nodeapps -n oel3 -A oel3-vip/255.255.255.0/eth0 -o /u01/app/10g/grid
[root@oel1 ~]#
Node three:
[root@oel3 ~]# /u01/app/10g/grid/root.sh
WARNING: directory '/u01/app/10g' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
OCR LOCATIONS =  /ocfs2/ocr,/ocfs2/ocr-mirror1
OCR backup directory '/u01/app/10g/grid/cdata/crs' does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/10g' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oel1 oel1-priv oel1
node 2: oel2 oel2-priv oel2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        oel1
        oel2
        oel3
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
[root@oel3 ~]#
After the scripts have run, the RAC cluster status:
[root@oel3 ~]# /u01/app/10g/grid/bin/crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....L1.inst application    0/5    0/0    ONLINE    ONLINE    oel1
ora....L2.inst application    0/5    0/0    ONLINE    ONLINE    oel2
ora.ORCL.db    application    0/0    0/1    ONLINE    ONLINE    oel3
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    oel1
ora....L1.lsnr application    0/5    0/0    ONLINE    ONLINE    oel1
ora.oel1.gsd   application    0/5    0/0    ONLINE    ONLINE    oel1
ora.oel1.ons   application    0/3    0/0    ONLINE    ONLINE    oel1
ora.oel1.vip   application    0/0    0/0    ONLINE    ONLINE    oel1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    oel2
ora....L2.lsnr application    0/5    0/0    ONLINE    ONLINE    oel2
ora.oel2.gsd   application    0/5    0/0    ONLINE    ONLINE    oel2
ora.oel2.ons   application    0/3    0/0    ONLINE    ONLINE    oel2
ora.oel2.vip   application    0/0    0/0    ONLINE    ONLINE    oel2
ora.oel3.gsd   application    0/5    0/0    ONLINE    ONLINE    oel3
ora.oel3.ons   application    0/3    0/0    ONLINE    ONLINE    oel3
ora.oel3.vip   application    0/0    0/0    ONLINE    ONLINE    oel3
[root@oel3 ~]#
Part 4: Database: installing the database software on node three
Launch "addNode.sh" from any node of the existing RAC cluster.
Node one is used here, logged in as oracle:
[root@oel1 ~]# su - oracle
[oracle@oel1 ~]$ cd $ORACLE_HOME
[oracle@oel1 dbhome_1]$ pwd
/u01/app/oracle/product/10g/dbhome_1
[oracle@oel1 dbhome_1]$ cd oui/bin/
[oracle@oel1 bin]$ ll
total 268
-rwxr-xr-x 1 oracle oinstall    186 Sep 19 13:48 addLangs.sh
-rwxr-xr-x 1 oracle oinstall     99 Sep 19 13:48 addNode.sh
-rwxr-xr-x 1 oracle oinstall    294 Sep 19 13:48 attachHome.sh
-rwxr-xr-x 1 oracle oinstall    201 Sep 19 13:48 detachHome.sh
-rwxr-xr-x 1 oracle oinstall  51104 Apr 20  2010 lsnodes
-rwxr-xr-x 1 oracle oinstall    335 Jul 18  2007 ouica.bat
-rwxr-xr-x 1 oracle oinstall    530 Sep 19 13:48 ouica.sh
drwxr-x--- 2 oracle oinstall   4096 Sep 19 13:35 resource
-rwxr-xr-x 1 oracle oinstall   1219 Sep 19 13:48 runConfig.sh
-rwxr-xr-x 1 oracle oinstall 175027 Apr  9  2010 runInstaller
-rwxr-xr-x 1 oracle oinstall     76 Sep 19 13:48 runInstaller.sh
[oracle@oel1 bin]$
[oracle@oel1 bin]$ export DISPLAY=192.168.119.1:0.0
[oracle@oel1 bin]$ xhost +
access control disabled, clients can connect from any host
[oracle@oel1 bin]$
[oracle@oel1 bin]$ ./addNode.sh &
[1] 434
[oracle@oel1 bin]$ Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
Oracle Universal Installer, Version 10.2.0.5.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

[oracle@oel1 bin]$
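For the database home, the silent form is simpler, since only the public node name is needed; again a sketch under the same assumption about the 10gR2 OUI variables:

./addNode.sh -silent "CLUSTER_NEW_NODES={oel3}"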
When OUI prompts for it, run root.sh on node three:
[root@oel3 ~]# /u01/app/oracle/product/10g/dbhome_1/root.sh
Running Oracle 10g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/10g/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
[root@oel3 ~]#
That completes the installation of the database software on the new node.
The cluster resource status is unchanged at this point:
[root@oel3 ~]# /u01/app/10g/grid/bin/crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....L1.inst application    0/5    0/0    ONLINE    ONLINE    oel1
ora....L2.inst application    0/5    0/0    ONLINE    ONLINE    oel2
ora.ORCL.db    application    0/0    0/1    ONLINE    ONLINE    oel3
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    oel1
ora....L1.lsnr application    0/5    0/0    ONLINE    ONLINE    oel1
ora.oel1.gsd   application    0/5    0/0    ONLINE    ONLINE    oel1
ora.oel1.ons   application    0/3    0/0    ONLINE    ONLINE    oel1
ora.oel1.vip   application    0/0    0/0    ONLINE    ONLINE    oel1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    oel2
ora....L2.lsnr application    0/5    0/0    ONLINE    ONLINE    oel2
ora.oel2.gsd   application    0/5    0/0    ONLINE    ONLINE    oel2
ora.oel2.ons   application    0/3    0/0    ONLINE    ONLINE    oel2
ora.oel2.vip   application    0/0    0/0    ONLINE    ONLINE    oel2
ora.oel3.gsd   application    0/5    0/0    ONLINE    ONLINE    oel3
ora.oel3.ons   application    0/3    0/0    ONLINE    ONLINE    oel3
ora.oel3.vip   application    0/0    0/0    ONLINE    ONLINE    oel3
[root@oel3 ~]#
Part 5: Listeners
(The NETCA dialogs are straightforward next-next GUI steps, so they are not described in detail.)
1. Run NETCA on each node (all three now) and delete the existing listeners (LISTENER_<node>):
Node one:
[root@oel1 ~]# su - oracle
[oracle@oel1 ~]$ netca &
[1] 23564
[oracle@oel1 ~]$
Oracle Net Services Configuration:
oel1...
Deleted listener: LISTENER_OEL1
oel2...
Deleted listener: LISTENER_OEL2
oel3...
Deleted listener: LISTENER_OEL3
Oracle Net Services configuration successful. The exit code is 0

[1]+  Done                    netca
[oracle@oel1 ~]$
Node two:
[oracle@oel2 admin]$
Oracle Net Services Configuration:
oel1...
Deleted listener: LISTENER_OEL1
oel2...
Deleted listener: LISTENER_OEL2
oel3...
Deleted listener: LISTENER_OEL3
[oracle@oel2 admin]$
Node three:
[oracle@oel3 admin]$
Oracle Net Services Configuration:
oel1...
Deleted listener: LISTENER_OEL1
oel2...
Deleted listener: LISTENER_OEL2
oel3...
Deleted listener: LISTENER_OEL3
[oracle@oel3 admin]$
With all the listeners removed, the cluster resource status:
[root@oel1 ~]# /u01/app/10g/grid/bin/crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....L1.inst application    0/5    0/0    ONLINE    ONLINE    oel1
ora....L2.inst application    0/5    0/0    ONLINE    ONLINE    oel2
ora.ORCL.db    application    0/0    0/1    ONLINE    ONLINE    oel3
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    oel1
ora.oel1.gsd   application    0/5    0/0    ONLINE    ONLINE    oel1
ora.oel1.ons   application    0/3    0/0    ONLINE    ONLINE    oel1
ora.oel1.vip   application    0/0    0/0    ONLINE    ONLINE    oel1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    oel2
ora.oel2.gsd   application    0/5    0/0    ONLINE    ONLINE    oel2
ora.oel2.ons   application    0/3    0/0    ONLINE    ONLINE    oel2
ora.oel2.vip   application    0/0    0/0    ONLINE    ONLINE    oel2
ora.oel3.gsd   application    0/5    0/0    ONLINE    ONLINE    oel3
ora.oel3.ons   application    0/3    0/0    ONLINE    ONLINE    oel3
ora.oel3.vip   application    0/0    0/0    ONLINE    ONLINE    oel3
[root@oel1 ~]#
2. Run NETCA on any one node to create the cluster listener, LISTENER.
Node two is used here:
[oracle@oel2 admin]$
Oracle Net Services Configuration:
Configuring Listener:LISTENER
Default local naming configuration complete.
oel1...
oel2...
oel3...
Listener configuration complete.
Oracle Net Services configuration successful. The exit code is 0

[1]+  Done                    netca
[oracle@oel2 admin]$
After the listener is created, the cluster resource status:
[root@oel1 ~]# /u01/app/10g/grid/bin/crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....L1.inst application    0/5    0/0    ONLINE    ONLINE    oel1
ora....L2.inst application    0/5    0/0    ONLINE    ONLINE    oel2
ora.ORCL.db    application    0/0    0/1    ONLINE    ONLINE    oel3
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    oel1
ora....L1.lsnr application    0/5    0/0    ONLINE    ONLINE    oel1
ora.oel1.gsd   application    0/5    0/0    ONLINE    ONLINE    oel1
ora.oel1.ons   application    0/3    0/0    ONLINE    ONLINE    oel1
ora.oel1.vip   application    0/0    0/0    ONLINE    ONLINE    oel1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    oel2
ora....L2.lsnr application    0/5    0/0    ONLINE    ONLINE    oel2
ora.oel2.gsd   application    0/5    0/0    ONLINE    ONLINE    oel2
ora.oel2.ons   application    0/3    0/0    ONLINE    ONLINE    oel2
ora.oel2.vip   application    0/0    0/0    ONLINE    ONLINE    oel2
ora....L3.lsnr application    0/5    0/0    ONLINE    ONLINE    oel3
ora.oel3.gsd   application    0/5    0/0    ONLINE    ONLINE    oel3
ora.oel3.ons   application    0/3    0/0    ONLINE    ONLINE    oel3
ora.oel3.vip   application    0/0    0/0    ONLINE    ONLINE    oel3
[root@oel1 ~]#
Part 6: Adding an Oracle instance for the new node with DBCA
Invoke DBCA on a node of the existing RAC.
Node one is used here:
[root@oel1 ~]# su - oracle
[oracle@oel1 ~]$ export DISPLAY=192.168.119.1:0.0
[oracle@oel1 ~]$ xhost +
access control disabled, clients can connect from any host
[oracle@oel1 ~]$
[oracle@oel1 ~]$ dbca &
[1] 29349
[oracle@oel1 ~]$
[oracle@oel1 ~]$
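In the DBCA GUI, choose Instance Management, then Add an instance, pick the ORCL database and node oel3, and accept the defaults. DBCA also documents a silent mode for this step; a sketch, with the SYS password as a placeholder:

dbca -silent -addInstance -nodeList oel3 \
  -gdbName ORCL -instanceName ORCL3 \
  -sysDBAUserName sys -sysDBAPassword change_me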
RAC resource status before the instance is added:
[oracle@oel1 ~]$ /u01/app/10g/grid/bin/crs_stat -t -v | grep --color db
ora.ORCL.db    application    0/0    0/1    ONLINE    ONLINE    oel3
[oracle@oel1 ~]$ /u01/app/10g/grid/bin/crs_stat -t -v | grep --color inst
ora....L1.inst application    0/5    0/0    ONLINE    ONLINE    oel1
ora....L2.inst application    0/5    0/0    ONLINE    ONLINE    oel2
[oracle@oel1 ~]$
Checking the RAC resource status again after DBCA completes:
[oracle@oel1 ~]$ /u01/app/10g/grid/bin/crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....L1.inst application    0/5    0/0    ONLINE    ONLINE    oel1
ora....L2.inst application    0/5    0/0    ONLINE    ONLINE    oel2
ora....L3.inst application    0/5    0/0    ONLINE    ONLINE    oel3
ora.ORCL.db    application    0/0    0/1    ONLINE    ONLINE    oel3
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    oel1
ora....L1.lsnr application    0/5    0/0    ONLINE    ONLINE    oel1
ora.oel1.gsd   application    0/5    0/0    ONLINE    ONLINE    oel1
ora.oel1.ons   application    0/3    0/0    ONLINE    ONLINE    oel1
ora.oel1.vip   application    0/0    0/0    ONLINE    ONLINE    oel1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    oel2
ora....L2.lsnr application    0/5    0/0    ONLINE    ONLINE    oel2
ora.oel2.gsd   application    0/5    0/0    ONLINE    ONLINE    oel2
ora.oel2.ons   application    0/3    0/0    ONLINE    ONLINE    oel2
ora.oel2.vip   application    0/0    0/0    ONLINE    ONLINE    oel2
ora....SM3.asm application    0/5    0/0    ONLINE    ONLINE    oel3
ora....L3.lsnr application    0/5    0/0    ONLINE    ONLINE    oel3
ora.oel3.gsd   application    0/5    0/0    ONLINE    ONLINE    oel3
ora.oel3.ons   application    0/3    0/0    ONLINE    ONLINE    oel3
ora.oel3.vip   application    0/0    0/0    ONLINE    ONLINE    oel3
[oracle@oel1 ~]$ /u01/app/10g/grid/bin/crs_stat -t -v | grep --color db
ora.ORCL.db    application    0/0    0/1    ONLINE    ONLINE    oel3
[oracle@oel1 ~]$ /u01/app/10g/grid/bin/crs_stat -t -v | grep --color inst
ora....L1.inst application    0/5    0/0    ONLINE    ONLINE    oel1
ora....L2.inst application    0/5    0/0    ONLINE    ONLINE    oel2
ora....L3.inst application    0/5    0/0    ONLINE    ONLINE    oel3
[oracle@oel1 ~]$
Part 7: Connecting to the database on node three and checking the cluster
On node three:
[root@oel3 ~]# su - oracle
[oracle@oel3 ~]$ ps -ef | grep --color pmon
oracle   29251     1  0 16:14 ?        00:00:00 asm_pmon_+ASM3
oracle   29865     1  0 16:15 ?        00:00:00 ora_pmon_ORCL3
oracle   31187 31088  0 16:18 pts/2    00:00:00 grep --color pmon
[oracle@oel3 ~]$ export ORACLE_SID=ORCL3
[oracle@oel3 ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.5.0 - Production on Fri Sep 19 16:18:34 2014

Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> desc gv$instance
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 INST_ID                                            NUMBER
 INSTANCE_NUMBER                                    NUMBER
 INSTANCE_NAME                                      VARCHAR2(16)
 HOST_NAME                                          VARCHAR2(64)
 VERSION                                            VARCHAR2(17)
 STARTUP_TIME                                       DATE
 STATUS                                             VARCHAR2(12)
 PARALLEL                                           VARCHAR2(3)
 THREAD#                                            NUMBER
 ARCHIVER                                           VARCHAR2(7)
 LOG_SWITCH_WAIT                                    VARCHAR2(15)
 LOGINS                                             VARCHAR2(10)
 SHUTDOWN_PENDING                                   VARCHAR2(3)
 DATABASE_STATUS                                    VARCHAR2(17)
 INSTANCE_ROLE                                      VARCHAR2(18)
 ACTIVE_STATE                                       VARCHAR2(9)
 BLOCKED                                            VARCHAR2(3)

SQL> col instance_name for a8
SQL> col host_name for a12
SQL> set linesize 300
SQL>
SQL> select inst_id,instance_number,instance_name,host_name,version,status from gv$instance;

   INST_ID INSTANCE_NUMBER INSTANCE HOST_NAME    VERSION           STATUS
---------- --------------- -------- ------------ ----------------- ------------
         3               3 ORCL3    oel3         10.2.0.5.0        OPEN
         2               2 ORCL2    oel2         10.2.0.5.0        OPEN
         1               1 ORCL1    oel1         10.2.0.5.0        OPEN

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
[oracle@oel3 ~]$
Part 8: Conclusion
This completes the addition of a new node to an OCFS2-based Oracle RAC 10g system.
——————————————————————
Ending.