Install Oracle database 11g RAC on RHEL6
This article walks through the details of deploying an Oracle Database 11g RAC architecture on RHEL6.
Before starting, a note on my environment and the configuration that must be prepared in advance:
As shown in the figure:
Openfiler — exports the iSCSI LUNs for the shared storage the RAC architecture requires
LinuxCenterService — provides the supporting infrastructure services for the RAC architecture, such as DNS name resolution, the NTP time service, and so on
Oracle11gRAC1 and Oracle11gRAC2 — the node servers of the RAC architecture
Since these are virtual machines and cannot be given more resources, the RAC architecture this document ends up with is as follows:
Openfiler:
Exported iSCSI LUN:
LUN name: oracle11grac
Capacity: 20 GB
Shared storage layout (this can also be read as the ASM disk group allocation strategy):
OCR + Vote Disk
Capacity: 1 GB
Raw device mapping: raw1
Data01
Capacity: 19 GB
Raw device mapping: raw2
Before starting this document:
Before you begin, you need, as shown in the screenshot at the top, to have:
1. Successfully installed VMware Workstation 10.0.4
2. Successfully created the virtual machines and deployed the required systems:
Openfiler: Openfiler 2.99
LinuxCenterService: Red Hat Enterprise Linux 5, 64-bit
Oracle11gRAC1/2: Red Hat Enterprise Linux 6, 64-bit
I. Preparation on LinuxCenterService (hereafter: the center server)
For this document, the center server needs to act as an NTP server; for the specifics, see:
Deploying an NTP server on Linux, http://d-prototype.com/archives/2306
II. Openfiler Server
For deploying the storage server, Openfiler, see: http://d-prototype.com/archives/423
Its configuration follows the Openfiler sections of my earlier RAC deployment documents; there is no essential difference — the steps are practically identical:
In that document (http://d-prototype.com/archives/1463), search for “Openfiler” with Ctrl + F to find the relevant part.
III. Configuration required on the Oracle11gRAC1/2 node servers
Hostname: /etc/sysconfig/network
By default, after the operating system is installed, the hostname is localhost.localdomain, which obviously does not meet our needs.
You can change the hostname for the current session with the “hostname” command.
For the change to survive a reboot, edit the value of the “HOSTNAME” entry in /etc/sysconfig/network, as follows:
[root@Oracle11gRAC1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=Oracle11gRAC1
NTPSERVERARGS=iburst
[root@Oracle11gRAC1 ~]#
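If you want to script this step, the HOSTNAME entry can be rewritten with sed. A minimal sketch — `set_hostname_cfg` is a name I made up, and the file path is a parameter so the change can be rehearsed on a copy first:

```shell
# Persist a new hostname in a sysconfig-style file (defaults to
# /etc/sysconfig/network). Run `hostname <name>` separately to change
# the running session; the file takes effect on the next boot.
set_hostname_cfg() {
  local name=$1 cfg=${2:-/etc/sysconfig/network}
  sed -i "s/^HOSTNAME=.*/HOSTNAME=$name/" "$cfg"
}
```

Run it on each node with that node's name, e.g. `set_hostname_cfg Oracle11gRAC1`.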
Disable the firewall and SELinux:
To keep the firewall from unexpectedly blocking ports and network traffic that RAC needs, and to keep SELinux from unexpectedly interfering with the RAC processes' access to operating-system resources, it is best to disable both before the installation and to leave them disabled once RAC is running.
The steps are as follows:
Disable the firewall: chkconfig iptables off
[root@Oracle11gRAC1 ~]# chkconfig iptables off
[root@Oracle11gRAC1 ~]# chkconfig iptables --list
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# service iptables stop
[root@Oracle11gRAC1 ~]# service iptables status
iptables: Firewall is not running.
[root@Oracle11gRAC1 ~]#
Disable SELinux:
To disable SELinux, change the value of the “SELINUX” entry in /etc/selinux/config to disabled; the default value is enforcing.
Specifically:
[root@Oracle11gRAC1 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

[root@Oracle11gRAC1 ~]#
Note that the SELinux change does not take effect immediately; it takes effect after the next reboot of the operating system.
Check the current SELinux status:
[root@Oracle11gRAC1 ~]# sestatus
SELinux status:                 disabled
[root@Oracle11gRAC1 ~]# getenforce
Disabled
[root@Oracle11gRAC1 ~]#
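The edit above can also be made non-interactively. A sketch, with the file path as a parameter so it can be tried on a copy first (`disable_selinux_cfg` is a hypothetical helper name):

```shell
# Rewrite the SELINUX= entry to "disabled" in the given config file
# (defaults to /etc/selinux/config). The change still only takes
# effect after the next reboot.
disable_selinux_cfg() {
  local cfg=${1:-/etc/selinux/config}
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
}
```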
Configure the HOSTS table
On Linux, by default, when resolving the mapping between hostnames and IP addresses, the local HOSTS table is consulted before DNS. So whether or not DNS is actually configured, the local HOSTS table needs to be set up properly.
For my environment, the local HOSTS table looks like this:
[root@Oracle11gRAC1 ~]# cat /etc/hosts
# Local
127.0.0.1        localhost

# Shared Storage
192.168.80.128   openfiler

# Linux Center Service
192.168.80.131   LinuxCenterService

# Oracle database 11g RAC Node: Public
192.168.80.129   Oracle11gRAC1
192.168.80.130   Oracle11gRAC2

# Oracle database 11g RAC Node: Virtual IP
192.168.80.229   Oracle11gRAC1-vip
192.168.80.230   Oracle11gRAC2-vip

# Oracle database 11g RAC Node: Private
192.168.132.129  Oracle11gRAC1-priv
192.168.132.130  Oracle11gRAC2-priv

# Oracle database 11g RAC: SCAN IP
192.168.80.141   scan1
[root@Oracle11gRAC1 ~]#
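Once /etc/hosts is in place, a quick loop can confirm that every name actually resolves. A sketch using getent (the helper name is mine; the argument list would be the hostnames from the table above):

```shell
# Print the first hostname that fails to resolve and return non-zero;
# print "all resolved" if every name is known.
check_hosts_resolve() {
  local h
  for h in "$@"; do
    if ! getent hosts "$h" >/dev/null; then
      echo "unresolved: $h"
      return 1
    fi
  done
  echo "all resolved"
}
```

Run on each node, e.g. `check_hosts_resolve Oracle11gRAC1 Oracle11gRAC1-vip Oracle11gRAC1-priv Oracle11gRAC2 scan1`.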
Following the hostname-to-IP mapping above, configure the public and private NICs (eth0, eth1) on each node server.
The relevant configuration files are:
/etc/sysconfig/network-scripts/ifcfg-ethX
/etc/resolv.conf
For an Oracle RAC architecture, you must ensure that the public NIC has an explicitly specified gateway address.
An optional recommendation: assign the NIC IP addresses statically rather than via DHCP.
Specifically:
(Node 1)
[root@Oracle11gRAC1 ~]# cat /etc/resolv.conf
# Generated by NetworkManager

#domain localdomain
#search localdomain
nameserver 192.168.80.131
[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
#BOOTPROTO="dhcp"
BOOTPROTO="static"
HWADDR="00:0C:29:7C:15:87"
IPV6INIT="no"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="5ec4f683-a0af-4f6a-bb9a-73ce3f04d9f6"
IPADDR=192.168.80.129
NETMASK=255.255.255.0
GATEWAY=192.168.80.1
[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=00:0C:29:7C:15:91
TYPE=Ethernet
UUID=85ad4ef2-477e-4ad2-b432-549834deed62
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.132.129
NETMASK=255.255.255.0
[root@Oracle11gRAC1 ~]#
The above is the state on node 1; for node 2, the NIC configuration is analogous.
Time synchronization:
In my environment, a time server is already configured: 192.168.80.131.
So the RAC node servers and the RAC shared-storage server must synchronize their time against that address.
Note that on the node servers, the NTPD startup options must include “-x”, as shown below:
[root@Oracle11gRAC1 ~]# cat /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid -g"
[root@Oracle11gRAC1 ~]#
The default configuration does not include “-x”. The “-x” option makes ntpd slew the clock gradually instead of stepping it, which is what Oracle Clusterware expects.
After enabling “-x”, you can see the difference in the process once the ntpd service starts:
Process status without “-x”:
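Adding the flag can be scripted idempotently. A sketch operating on a parameterized file path so it can be tried on a copy (`add_ntpd_slew_flag` is a hypothetical name):

```shell
# Prepend -x to the OPTIONS line of /etc/sysconfig/ntpd (or a given
# file) unless it is already present.
add_ntpd_slew_flag() {
  local cfg=${1:-/etc/sysconfig/ntpd}
  grep -q '^OPTIONS=".*-x' "$cfg" || sed -i 's/^OPTIONS="/OPTIONS="-x /' "$cfg"
}
```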
[root@Oracle11gRAC1 ~]# ps -ef | grep --color ntp
root      2276  2224  0 22:44 pts/0    00:00:00 grep --color ntp
ntp       4576     1  0 18:17 ?        00:00:02 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
[root@Oracle11gRAC1 ~]#
Process status with “-x”:
[root@Oracle11gRAC1 ~]# ps -ef | grep --color ntp
ntp       2290     1  0 22:44 ?        00:00:00 ntpd -x -u ntp:ntp -p /var/run/ntpd.pid -g
root      2293  2224  0 22:44 pts/0    00:00:00 grep --color ntp
[root@Oracle11gRAC1 ~]#
NTP configuration:
You can configure the time-sync policy with the graphical tool (system-config-date).
You can also edit the configuration file /etc/ntp.conf directly to the same effect, since the graphical tool ultimately edits that same file anyway.
[root@Oracle11gRAC1 ~]# cat /etc/ntp.conf | grep --color "192.168.80.131"
server 192.168.80.131 iburst
[root@Oracle11gRAC1 ~]#
As shown above, add a “server 192.168.80.131 iburst” entry to the configuration file.
Then start or restart the “ntpd” service: service ntpd restart.
With “ntpstat” you can check the current state of the NTP service; it needs to report as follows to be considered working properly:
[root@Oracle11gRAC1 ~]# ntpstat
synchronised to NTP server (192.168.80.131) at stratum 12
   time correct to within 16 ms
   polling server every 64 s
[root@Oracle11gRAC1 ~]#
YUM and packages:
[root@Oracle11gRAC1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              18G  2.5G   15G  15% /
tmpfs                 935M   72K  935M   1% /dev/shm
/dev/sda1             291M   39M  238M  14% /boot
/dev/sr1              3.6G  3.6G     0 100% /iso
[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# cat /etc/yum.repos.d/iso.repo
[ISO]
name=ISO
baseurl=file:///iso
gpgcheck=0
[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# yum repolist
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
ISO                                             | 3.9 kB     00:00 ...
repo id          repo name          status
ISO              ISO                3,690
repolist: 3,690
[root@Oracle11gRAC1 ~]#
Install the packages:
yum install -y binutils.x86_64 compat-libcap1.x86_64 compat-libstdc++-33.i686 compat-libstdc++-33.x86_64 gcc.x86_64 gcc-c++.x86_64 glibc.x86_64 glibc.i686 glibc-devel.x86_64 glibc-devel.i686 ksh.x86_64 libgcc.x86_64 libgcc.i686 libstdc++.i686 libstdc++.x86_64 libstdc++-devel.i686 libstdc++-devel.x86_64 libaio.x86_64 libaio.i686 libaio-devel.i686 libaio-devel.x86_64 libXext.x86_64 libXext.i686 libXtst.x86_64 libXtst.i686 libX11.x86_64 libX11.i686 libXau.x86_64 libXau.i686 libxcb.x86_64 libxcb.i686 libXi.x86_64 libXi.i686 make.x86_64 sysstat.x86_64 elfutils-libelf-devel.i686 elfutils-libelf-devel.x86_64
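After the install, it is worth confirming nothing was silently skipped. A sketch of a checker — the `QUERY_CMD` override exists only so the logic can be exercised off-box; on the nodes it defaults to `rpm -q`:

```shell
# Print any package from the argument list that the query command
# reports as missing; the exit status is non-zero if anything is missing.
check_pkgs() {
  local q=${QUERY_CMD:-"rpm -q"} p rc=0
  for p in "$@"; do
    if ! $q "$p" >/dev/null 2>&1; then
      echo "missing: $p"
      rc=1
    fi
  done
  return $rc
}
```

Usage on a node would be, e.g., `check_pkgs binutils.x86_64 compat-libcap1.x86_64 ksh.x86_64 …` with the full list above.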
File: /etc/security/limits.conf
# Oracle Database 11g RAC
# for grid
grid    soft    nproc     2047
grid    hard    nproc     16384
grid    soft    nofile    1024
grid    hard    nofile    65536
# for oracle
oracle  soft    nproc     2047
oracle  hard    nproc     16384
oracle  soft    nofile    1024
oracle  hard    nofile    65536
File: /etc/sysctl.conf
# for Oracle database 11g RAC
kernel.shmmax = 4294967295
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
fs.aio-max-nr=1048576
To apply the sysctl.conf changes immediately, run sysctl -p, as shown below:
[root@Oracle11gRAC1 ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.shmmax = 4294967295
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
[root@Oracle11gRAC1 ~]#
File: /etc/pam.d/login
# Oracle Database 11g RAC
session    required     pam_limits.so
Create the groups and users:
[root@Oracle11gRAC1 ~]# cat create_user.sh
groupadd -g 1000 oinstall
groupadd -g 1200 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper
useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -c "Grid Infrastructure Owner" grid
echo "oracle" | passwd grid --stdin
groupadd -g 1300 dba
groupadd -g 1301 oper
useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -c "Oracle Software Owner" oracle
echo "oracle" | passwd oracle --stdin
[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# sh create_user.sh
Changing password for user grid.
passwd: all authentication tokens updated successfully.
Changing password for user oracle.
passwd: all authentication tokens updated successfully.
[root@Oracle11gRAC1 ~]#
User environment variables:
GRID:/home/grid/.bash_profile
# Oracle Database 11g RAC
# Grid Node1
ORACLE_SID=+ASM1; export ORACLE_SID
JAVA_HOME=/usr/local/java; export JAVA_HOME
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
ORACLE_PATH=/u01/app/oracle/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
ORACLE:/home/oracle/.bash_profile
# Oracle Database 11g RAC
# Oracle Node1
ORACLE_SID=edendb1; export ORACLE_SID
ORACLE_UNQNAME=edendb; export ORACLE_UNQNAME
JAVA_HOME=/usr/local/java; export JAVA_HOME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME
ORACLE_PATH=/u01/app/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
Create the required directory structure:
[root@Oracle11gRAC1 ~]# mkdir /u01/app/oracle -p
[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# chown -R grid.oinstall /u01
[root@Oracle11gRAC1 ~]# chown -R oracle.oinstall /u01/app/oracle/
[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# chmod -R 775 /u01
[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# su - grid
[grid@Oracle11gRAC1 ~]$ mkdir -p $ORACLE_BASE
[grid@Oracle11gRAC1 ~]$ mkdir -p $ORACLE_HOME
[grid@Oracle11gRAC1 ~]$ exit
logout
[root@Oracle11gRAC1 ~]# su - oracle
[oracle@Oracle11gRAC1 ~]$ mkdir -p $ORACLE_HOME
[oracle@Oracle11gRAC1 ~]$ exit
logout
[root@Oracle11gRAC1 ~]#
SSH equivalence:
ssh-keygen
ssh-copy-id
(Node 1)
[root@Oracle11gRAC1 ~]# su - grid
[grid@Oracle11gRAC1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa):
Created directory '/home/grid/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
53:61:e9:61:db:37:e4:b1:7d:d3:c5:35:40:37:24:cf grid@Oracle11gRAC1
The key's randomart image is:
+--[ RSA 2048]----+
|          o..+o*o|
|          .+. B =|
|         o.+ o Eo|
|         .o . =.+|
|        S .    .o|
|               . |
|                 |
|                 |
|                 |
+-----------------+
[grid@Oracle11gRAC1 ~]$
[grid@Oracle11gRAC1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub Oracle11gRAC1
The authenticity of host 'oracle11grac1 (192.168.80.129)' can't be established.
RSA key fingerprint is fc:ea:16:b9:fe:25:9f:cd:bf:48:eb:ca:2c:e7:a4:62.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'oracle11grac1,192.168.80.129' (RSA) to the list of known hosts.
grid@oracle11grac1's password:
Now try logging into the machine, with "ssh 'Oracle11gRAC1'", and check in:
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.

[grid@Oracle11gRAC1 ~]$
[grid@Oracle11gRAC1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub Oracle11gRAC1-priv
The authenticity of host 'oracle11grac1-priv (192.168.132.129)' can't be established.
RSA key fingerprint is fc:ea:16:b9:fe:25:9f:cd:bf:48:eb:ca:2c:e7:a4:62.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'oracle11grac1-priv,192.168.132.129' (RSA) to the list of known hosts.
Now try logging into the machine, with "ssh 'Oracle11gRAC1-priv'", and check in:
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.

[grid@Oracle11gRAC1 ~]$
[grid@Oracle11gRAC1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub Oracle11gRAC2
The authenticity of host 'oracle11grac2 (192.168.80.130)' can't be established.
RSA key fingerprint is f6:84:45:62:94:52:b8:ee:fe:67:33:4b:38:33:d1:04.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'oracle11grac2,192.168.80.130' (RSA) to the list of known hosts.
grid@oracle11grac2's password:
Now try logging into the machine, with "ssh 'Oracle11gRAC2'", and check in:
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.

[grid@Oracle11gRAC1 ~]$
[grid@Oracle11gRAC1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub Oracle11gRAC2-priv
The authenticity of host 'oracle11grac2-priv (192.168.132.130)' can't be established.
RSA key fingerprint is f6:84:45:62:94:52:b8:ee:fe:67:33:4b:38:33:d1:04.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'oracle11grac2-priv,192.168.132.130' (RSA) to the list of known hosts.
Now try logging into the machine, with "ssh 'Oracle11gRAC2-priv'", and check in:
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.

[grid@Oracle11gRAC1 ~]$
[grid@Oracle11gRAC1 ~]$ ssh Oracle11gRAC1 date
Sun Feb  1 23:27:26 CST 2015
[grid@Oracle11gRAC1 ~]$ ssh Oracle11gRAC1-priv date
Sun Feb  1 23:27:30 CST 2015
[grid@Oracle11gRAC1 ~]$ ssh Oracle11gRAC2 date
Sun Feb  1 23:27:34 CST 2015
[grid@Oracle11gRAC1 ~]$ ssh Oracle11gRAC2-priv date
Sun Feb  1 23:27:44 CST 2015
[grid@Oracle11gRAC1 ~]$
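The same equivalence should also be set up for the oracle user, and a loop can verify all four aliases at once. A sketch — `SSH_CMD` is overridable only so the loop itself can be tested without real hosts:

```shell
# Verify passwordless SSH to each alias; BatchMode makes ssh fail
# instead of prompting for a password.
check_ssh_equiv() {
  local cmd=${SSH_CMD:-"ssh -o BatchMode=yes -o ConnectTimeout=5"} h
  for h in "$@"; do
    if ! $cmd "$h" true; then
      echo "no equivalence: $h"
      return 1
    fi
  done
  echo "all reachable"
}
```

Run as grid (and later as oracle) on each node: `check_ssh_equiv Oracle11gRAC1 Oracle11gRAC1-priv Oracle11gRAC2 Oracle11gRAC2-priv`.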
Partitioning the shared storage:
In this environment, the shared storage is exported by Openfiler.
First, the node servers need to obtain the LUN exported by Openfiler, as follows:
[root@Oracle11gRAC1 ~]# yum install -y iscsi*
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package iscsi-initiator-utils.x86_64 0:6.2.0.873-10.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                  Arch      Version              Repository    Size
================================================================================
Installing:
 iscsi-initiator-utils    x86_64    6.2.0.873-10.el6     ISO           686 k

Transaction Summary
================================================================================
Install       1 Package(s)

Total download size: 686 k
Installed size: 2.3 M
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : iscsi-initiator-utils-6.2.0.873-10.el6.x86_64          1/1
  Verifying  : iscsi-initiator-utils-6.2.0.873-10.el6.x86_64          1/1

Installed:
  iscsi-initiator-utils.x86_64 0:6.2.0.873-10.el6

Complete!
[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# chkconfig iscsi on
[root@Oracle11gRAC1 ~]# chkconfig iscsid on
[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# service iscsi start
[root@Oracle11gRAC1 ~]# service iscsid start
[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# iscsiadm -m discovery -t sendtargets -p openfiler
192.168.80.128:3260,1 oracle11g
[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# iscsiadm -m node -T oracle11g sendtargets -p openfiler -l
Logging in to [iface: default, target: oracle11g, portal: 192.168.80.128,3260] (multiple)
Login to [iface: default, target: oracle11g, portal: 192.168.80.128,3260] successful.
[root@Oracle11gRAC1 ~]#
Then partition the shared storage:
[root@Oracle11gRAC1 ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000b22

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        2611    20458496   8e  Linux LVM

Disk /dev/mapper/vg_oracle11grac1-lv_root: 18.8 GB, 18798870528 bytes
255 heads, 63 sectors/track, 2285 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg_oracle11grac1-lv_swap: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb: 16.4 GB, 16374562816 bytes
64 heads, 32 sectors/track, 15616 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x9ed2f535.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p

Disk /dev/sdb: 16.4 GB, 16374562816 bytes
64 heads, 32 sectors/track, 15616 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9ed2f535

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-15616, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-15616, default 15616): +1G

Command (m for help): p

Disk /dev/sdb: 16.4 GB, 16374562816 bytes
64 heads, 32 sectors/track, 15616 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9ed2f535

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1025     1049584   83  Linux

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (1026-15616, default 1026):
Using default value 1026
Last cylinder, +cylinders or +size{K,M,G} (1026-15616, default 15616):
Using default value 15616

Command (m for help): p

Disk /dev/sdb: 16.4 GB, 16374562816 bytes
64 heads, 32 sectors/track, 15616 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9ed2f535

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1025     1049584   83  Linux
/dev/sdb2            1026       15616    14941184   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@Oracle11gRAC1 ~]# partprobe /dev/sdb
[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 16.4 GB, 16374562816 bytes
64 heads, 32 sectors/track, 15616 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9ed2f535

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1025     1049584   83  Linux
/dev/sdb2            1026       15616    14941184   83  Linux
[root@Oracle11gRAC1 ~]#
Configure the raw devices:
File: /etc/rc.local
# Raw Devices
# OCR + Vote disk
raw /dev/raw/raw1 /dev/sdb1
# Data01
raw /dev/raw/raw2 /dev/sdb2
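An alternative to binding in rc.local on RHEL 6 is to let udev perform the binding at device-add time, in the same 60-raw.rules file used below for ownership. A sketch using this environment's device names — verify your own partition names first, since iSCSI device naming is not guaranteed stable:

```
# /etc/udev/rules.d/60-raw.rules — bind raw devices when the sdb partitions appear
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"
```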
Configure udev:
File: /etc/udev/rules.d/60-raw.rules
# for Oracle database 11g RAC
KERNEL=="raw*",OWNER="grid",GROUP="asmadmin",MODE="0660"
Create the raw devices:
[root@Oracle11gRAC1 ~]# ll /dev/raw/
total 0
crw-rw----. 1 root disk 162, 0 Feb  2 11:49 rawctl
[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# sh /etc/rc.local
/dev/raw/raw1:  bound to major 8, minor 17
/dev/raw/raw2:  bound to major 8, minor 18
[root@Oracle11gRAC1 ~]#
[root@Oracle11gRAC1 ~]# ll /dev/raw/
total 0
crw-rw----. 1 grid asmadmin 162, 1 Feb  2 15:08 raw1
crw-rw----. 1 grid asmadmin 162, 2 Feb  2 15:08 raw2
crw-rw----. 1 root disk     162, 0 Feb  2 11:49 rawctl
[root@Oracle11gRAC1 ~]#
At this point, the operating-system preparation of the node servers is complete.
IV. Grid Infrastructure
Next, upload the installation media to the server.
Out of personal habit, I keep uploaded media in /software.
Unpack:
[root@Oracle11gRAC1 ~]# su - grid
[grid@Oracle11gRAC1 ~]$ cd /software
[grid@Oracle11gRAC1 software]$ ll
total 1177008
-rw-r--r--. 1 root root 1205251894 Feb  2 15:24 p13390677_112040_Linux-x86-64_3of7.zip
[grid@Oracle11gRAC1 software]$
[grid@Oracle11gRAC1 software]$ du -sh *
1.2G    p13390677_112040_Linux-x86-64_3of7.zip
[grid@Oracle11gRAC1 software]$
[grid@Oracle11gRAC1 software]$ unzip p13390677_112040_Linux-x86-64_3of7.zip
(... lengthy output omitted ...)
[grid@Oracle11gRAC1 software]$ ll
total 1177012
drwxr-xr-x. 7 grid oinstall       4096 Aug 27  2013 grid
-rw-r--r--. 1 root root     1205251894 Feb  2 15:24 p13390677_112040_Linux-x86-64_3of7.zip
[grid@Oracle11gRAC1 software]$ cd grid
[grid@Oracle11gRAC1 grid]$ ls
install  readme.html  response  rpm  runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html
[grid@Oracle11gRAC1 grid]$
Install the cvuqdisk package:
[root@oracle11grac1 ~]# cd /software/
[root@oracle11grac1 software]# cd grid/
[root@oracle11grac1 grid]# ls
install  readme.html  response  rpm  runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html
[root@oracle11grac1 grid]#
[root@oracle11grac1 grid]# pwd
/software/grid
[root@oracle11grac1 grid]# ls
install  readme.html  response  rpm  runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html
[root@oracle11grac1 grid]# ls rpm/
cvuqdisk-1.0.9-1.rpm
[root@oracle11grac1 grid]#
[root@oracle11grac1 grid]# rpm -ivh rpm/cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
Using default group oinstall to install package
   1:cvuqdisk               ########################################### [100%]
[root@oracle11grac1 grid]#
Run “./runInstaller” to bring up the OUI for a graphical installation. When prompted, run the following scripts as root:
1. Node 1: orainstRoot.sh
[root@oracle11grac1 grid]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@oracle11grac1 grid]#
2. Node 2: orainstRoot.sh
[root@oracle11grac2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@oracle11grac2 ~]#
3. Node 1: root.sh
[root@oracle11grac1 grid]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'oracle11grac1'
CRS-2676: Start of 'ora.mdnsd' on 'oracle11grac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'oracle11grac1'
CRS-2676: Start of 'ora.gpnpd' on 'oracle11grac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'oracle11grac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'oracle11grac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'oracle11grac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'oracle11grac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'oracle11grac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'oracle11grac1'
CRS-2676: Start of 'ora.diskmon' on 'oracle11grac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'oracle11grac1' succeeded

ASM created and started successfully.

Disk Group CRS created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 6142d5fd436d4f72bff1956ef9316503.
Successfully replaced voting disk group with +CRS.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name       Disk group
--  -----    -----------------                ---------       ---------
 1. ONLINE   6142d5fd436d4f72bff1956ef9316503 (/dev/raw/raw1) [CRS]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'oracle11grac1'
CRS-2676: Start of 'ora.asm' on 'oracle11grac1' succeeded
CRS-2672: Attempting to start 'ora.CRS.dg' on 'oracle11grac1'
CRS-2676: Start of 'ora.CRS.dg' on 'oracle11grac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@oracle11grac1 grid]#
4. Node 2: root.sh
[root@oracle11grac2 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node oracle11grac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@oracle11grac2 ~]#
The error reported here can be ignored; the CVU does indeed have some problems during its self-check — this is a bug confirmed by Oracle.
Check the current CRS status:
[root@oracle11grac1 grid]# su - grid
[grid@oracle11grac1 ~]$
[grid@oracle11grac1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.CRS.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    orac...rac1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    orac...rac1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    orac...rac1
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    orac...rac1
ora.cvu        ora.cvu.type   0/5    0/0    ONLINE    ONLINE    orac...rac1
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    orac...rac1
ora.oc4j       ora.oc4j.type  0/1    0/2    ONLINE    ONLINE    orac...rac1
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    orac...rac1
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    orac...rac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    orac...rac1
ora....ac1.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....ac1.ons application    0/3    0/0    ONLINE    ONLINE    orac...rac1
ora....ac1.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    orac...rac1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    orac...rac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    orac...rac2
ora....ac2.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....ac2.ons application    0/3    0/0    ONLINE    ONLINE    orac...rac2
ora....ac2.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    orac...rac2
ora....ry.acfs ora....fs.type 0/5    0/     ONLINE    ONLINE    orac...rac1
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    orac...rac1
[grid@oracle11grac1 ~]$
5. Database
Next comes the installation of the Database software.
Unpack the installation archives:
[root@oracle11grac1 software]# su - oracle
[oracle@oracle11grac1 ~]$ cd /software
[oracle@oracle11grac1 software]$ ll
total 2487208
-rw-r--r-- 1 root root 1395582860 Feb  2 17:55 p13390677_112040_Linux-x86-64_1of7.zip
-rw-r--r-- 1 root root 1151304589 Feb  2 17:55 p13390677_112040_Linux-x86-64_2of7.zip
[oracle@oracle11grac1 software]$
The command is:
unzip p13390677_112040_Linux-x86-64_1of7.zip; unzip p13390677_112040_Linux-x86-64_2of7.zip
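Before extracting, it is worth confirming that both archives are present and intact, since a truncated download silently produces a broken database/ staging area. A minimal sketch (file names taken from the transcript above; `unzip -t` walks the archive and checks every member's CRC without extracting anything):

```shell
#!/bin/sh
# Verify both installer archives before extracting them.
# Run this in /software, where the zips were placed.
status=""
for f in p13390677_112040_Linux-x86-64_1of7.zip \
         p13390677_112040_Linux-x86-64_2of7.zip; do
    if [ ! -f "$f" ]; then
        status="$status missing:$f"
    elif unzip -t "$f" > /dev/null 2>&1; then
        # -t tests archive integrity without writing any files.
        status="$status ok:$f"
    else
        status="$status corrupt:$f"
    fi
done
echo "archive check:$status"
```

Only extract once both archives report ok; the sizes should also match the listing shown above.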
After unpacking, launch the OUI with "./runInstaller":
[oracle@oracle11grac1 software]$ ll
total 2487212
drwxr-xr-x 7 oracle oinstall       4096 Aug 27  2013 database
-rw-r--r-- 1 root   root     1395582860 Feb  2 17:55 p13390677_112040_Linux-x86-64_1of7.zip
-rw-r--r-- 1 root   root     1151304589 Feb  2 17:55 p13390677_112040_Linux-x86-64_2of7.zip
[oracle@oracle11grac1 software]$ cd database/
[oracle@oracle11grac1 database]$ ll
total 60
drwxr-xr-x  4 oracle oinstall  4096 Aug 27  2013 install
-rw-r--r--  1 oracle oinstall 30016 Aug 27  2013 readme.html
drwxr-xr-x  2 oracle oinstall  4096 Aug 27  2013 response
drwxr-xr-x  2 oracle oinstall  4096 Aug 27  2013 rpm
-rwxr-xr-x  1 oracle oinstall  3267 Aug 27  2013 runInstaller
drwxr-xr-x  2 oracle oinstall  4096 Aug 27  2013 sshsetup
drwxr-xr-x 14 oracle oinstall  4096 Aug 27  2013 stage
-rw-r--r--  1 oracle oinstall   500 Aug 27  2013 welcome.html
[oracle@oracle11grac1 database]$
When the OUI installation completes, run root.sh on each node as root.
1. Node 1: root.sh
[root@oracle11grac1 software]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@oracle11grac1 software]#
[root@oracle11grac1 software]#
2. Node 2: root.sh
[root@oracle11grac2 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@oracle11grac2 ~]#
6. ASM Disk Group
Before creating the database, configure the ASM disk group that will store the database's data files.
This part of the configuration is done with asmca.
As the grid user, run "asmca" to bring up its graphical interface:
[root@oracle11grac1 software]# su - grid
[grid@oracle11grac1 ~]$ export DISPLAY=168.0.1.185:0.0
[grid@oracle11grac1 ~]$ xhost +
access control disabled, clients can connect from any host
[grid@oracle11grac1 ~]$
[grid@oracle11grac1 ~]$ asmca &
[grid@oracle11grac1 ~]$
The result can also be checked from the command line, as follows:
[grid@oracle11grac1 ~]$ whoami
grid
[grid@oracle11grac1 ~]$ ps -ef | grep pmon --color
grid      4927     1  0 17:37 ?        00:00:00 asm_pmon_+ASM1
grid     24769 23665  0 18:39 pts/1    00:00:00 grep pmon --color
[grid@oracle11grac1 ~]$
[grid@oracle11grac1 ~]$ env | grep SID
ORACLE_SID=+ASM1
[grid@oracle11grac1 ~]$
[grid@oracle11grac1 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512    4096  1048576      1024      628                0             628              0             Y  CRS/
MOUNTED  EXTERN  N         512    4096  1048576     14591    14496                0           14496              0             N  DATA01/
ASMCMD>
ASMCMD> quit
[grid@oracle11grac1 ~]$
[grid@oracle11grac1 ~]$
As shown above, creation of the ASM disk groups is complete.
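The lsdg figures can also be turned into a quick capacity check. Below is a sketch that computes the free-space percentage per disk group from lsdg-style columns (sample values taken from the transcript above; on a live node you would pipe the real `asmcmd lsdg` output, minus its header line, into the same awk program):

```shell
#!/bin/sh
# Compute percentage of free space per ASM disk group.
# In this 11.2 lsdg layout, column 7 = Total_MB, column 8 = Free_MB,
# and column 13 = the disk group name.
lsdg_sample() {
cat <<'EOF'
MOUNTED EXTERN N 512 4096 1048576 1024 628 0 628 0 Y CRS/
MOUNTED EXTERN N 512 4096 1048576 14591 14496 0 14496 0 N DATA01/
EOF
}
lsdg_sample | awk '{ printf "%s %.0f%% free\n", $13, 100 * $8 / $7 }'
```

For the sample data this reports CRS/ at 61% free and DATA01/ at 99% free, matching the MB figures above.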
7. Creating the Instance (Database Creation)
This part is done through DBCA.
As the oracle user, run "dbca" to bring up the DBCA graphical interface:
[root@oracle11grac1 software]# su - oracle
[oracle@oracle11grac1 ~]$ dbca &
[1] 25258
[oracle@oracle11grac1 ~]$
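DBCA can also run non-interactively, which is handy when no X display is available. The following is a hedged sketch that only assembles a plausible silent-mode command line: the database name dblogic and disk group DATA01 come from this environment, but the template name and remaining flags are typical 11.2 values you would adjust to your setup, so the sketch prints the command instead of executing it:

```shell
#!/bin/sh
# Assemble (but do not run) a dbca silent-mode invocation.
# All parameter values below are assumptions matching this walkthrough;
# review them against your own environment before running dbca for real.
ORACLE_HOME=${ORACLE_HOME:-/u01/app/oracle/product/11.2.0/dbhome_1}
DBCA_CMD="$ORACLE_HOME/bin/dbca -silent -createDatabase \
 -templateName General_Purpose.dbc \
 -gdbName dblogic -sid dblogic \
 -nodelist oracle11grac1,oracle11grac2 \
 -storageType ASM -diskGroupName DATA01 \
 -characterSet AL32UTF8"
echo "$DBCA_CMD"
```

Running the echoed command as the oracle user would create the same two-instance RAC database that the GUI steps produce.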
At this point, database creation with DBCA is complete.
The RAC's CRS resources now look like this:
[root@oracle11grac1 ~]# su - grid
[grid@oracle11grac1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.CRS.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    orac...rac1
ora.DATA01.dg  ora....up.type 0/5    0/     ONLINE    ONLINE    orac...rac1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    orac...rac1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    orac...rac1
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    orac...rac1
ora.cvu        ora.cvu.type   0/5    0/0    ONLINE    ONLINE    orac...rac1
ora.dblogic.db ora....se.type 0/2    0/1    ONLINE    ONLINE    orac...rac1
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    orac...rac1
ora.oc4j       ora.oc4j.type  0/1    0/2    ONLINE    ONLINE    orac...rac1
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    orac...rac1
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    orac...rac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    orac...rac1
ora....ac1.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....ac1.ons application    0/3    0/0    ONLINE    ONLINE    orac...rac1
ora....ac1.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    orac...rac1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    orac...rac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    orac...rac2
ora....ac2.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....ac2.ons application    0/3    0/0    ONLINE    ONLINE    orac...rac2
ora....ac2.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    orac...rac2
ora....ry.acfs ora....fs.type 0/5    0/     ONLINE    ONLINE    orac...rac1
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    orac...rac1
[grid@oracle11grac1 ~]$
Connect to the database:
[oracle@oracle11grac1 ~]$ ps -ef | grep ora_
oracle    2151     1  0 19:06 ?        00:00:00 ora_gcr0_dblogic1
oracle    2372     1  0 19:07 ?        00:00:00 ora_w001_dblogic1
oracle    2686     1  0 19:07 ?        00:00:00 ora_q002_dblogic1
oracle    3978     1  0 19:08 ?        00:00:00 ora_q003_dblogic1
oracle    4124     1  0 19:09 ?        00:00:00 ora_o000_dblogic1
oracle    5379 25205  0 19:11 pts/1    00:00:00 grep ora_
oracle   28953     1  0 19:01 ?        00:00:00 ora_pmon_dblogic1
oracle   28957     1  0 19:01 ?        00:00:00 ora_psp0_dblogic1
oracle   28982     1  1 19:01 ?        00:00:07 ora_vktm_dblogic1
oracle   28986     1  0 19:01 ?        00:00:00 ora_gen0_dblogic1
oracle   28988     1  0 19:01 ?        00:00:00 ora_diag_dblogic1
oracle   28991     1  0 19:01 ?        00:00:00 ora_dbrm_dblogic1
oracle   28993     1  0 19:01 ?        00:00:00 ora_ping_dblogic1
oracle   28995     1  0 19:01 ?        00:00:00 ora_acms_dblogic1
oracle   28998     1  0 19:01 ?        00:00:01 ora_dia0_dblogic1
oracle   29000     1  0 19:01 ?        00:00:01 ora_lmon_dblogic1
oracle   29002     1  0 19:01 ?        00:00:03 ora_lmd0_dblogic1
oracle   29004     1  0 19:01 ?        00:00:03 ora_lms0_dblogic1
oracle   29008     1  0 19:01 ?        00:00:00 ora_rms0_dblogic1
oracle   29010     1  0 19:01 ?        00:00:00 ora_lmhb_dblogic1
oracle   29012     1  0 19:01 ?        00:00:00 ora_mman_dblogic1
oracle   29014     1  0 19:01 ?        00:00:00 ora_dbw0_dblogic1
oracle   29016     1  0 19:01 ?        00:00:00 ora_lgwr_dblogic1
oracle   29018     1  0 19:01 ?        00:00:00 ora_ckpt_dblogic1
oracle   29020     1  0 19:01 ?        00:00:00 ora_smon_dblogic1
oracle   29022     1  0 19:01 ?        00:00:00 ora_reco_dblogic1
oracle   29024     1  0 19:01 ?        00:00:00 ora_rbal_dblogic1
oracle   29026     1  0 19:01 ?        00:00:00 ora_asmb_dblogic1
oracle   29028     1  0 19:01 ?        00:00:01 ora_mmon_dblogic1
oracle   29032     1  0 19:01 ?        00:00:00 ora_mmnl_dblogic1
oracle   29034     1  0 19:01 ?        00:00:00 ora_d000_dblogic1
oracle   29036     1  0 19:01 ?        00:00:00 ora_mark_dblogic1
oracle   29038     1  0 19:01 ?        00:00:00 ora_s000_dblogic1
oracle   29047     1  0 19:01 ?        00:00:02 ora_lck0_dblogic1
oracle   29049     1  0 19:01 ?        00:00:00 ora_rsmn_dblogic1
oracle   29078     1  0 19:01 ?        00:00:00 ora_gtx0_dblogic1
oracle   29080     1  0 19:01 ?        00:00:00 ora_rcbg_dblogic1
oracle   29082     1  0 19:01 ?        00:00:00 ora_qmnc_dblogic1
oracle   29084     1  0 19:01 ?        00:00:00 ora_pz99_dblogic1
oracle   29129     1  0 19:02 ?        00:00:00 ora_cjq0_dblogic1
oracle   29145     1  0 19:02 ?        00:00:00 ora_q000_dblogic1
oracle   29147     1  0 19:02 ?        00:00:00 ora_q001_dblogic1
oracle   29155     1  0 19:02 ?        00:00:00 ora_smco_dblogic1
oracle   29228     1  0 19:02 ?        00:00:00 ora_w000_dblogic1
[oracle@oracle11grac1 ~]$
[oracle@oracle11grac1 ~]$ export ORACLE_SID=dblogic1
[oracle@oracle11grac1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Feb 2 19:12:17 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select instance_name,status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
dblogic1         OPEN

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
[oracle@oracle11grac1 ~]$
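The ps check above can be made scriptable: each running instance has an ora_pmon_<SID> process monitor, so parsing the process list for that prefix yields the local instance names. A minimal sketch (on node 1 of this cluster it would report dblogic1; on a host with no Oracle instance it reports none):

```shell
#!/bin/sh
# List local Oracle instance names by finding their pmon processes.
# ora_pmon_<SID> is the per-instance process monitor, so stripping the
# prefix leaves the SID.
instances=$(ps -e -o args= | awk '/^ora_pmon_/ { sub(/^ora_pmon_/, ""); print }')
if [ -n "$instances" ]; then
    msg="local instances: $instances"
else
    msg="no local Oracle instance found"
fi
echo "$msg"
```

Matching on the command name rather than `grep ora_` avoids the grep process matching itself, as it does in the transcript above.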
8. Conclusion
This completes the two-node deployment of Oracle Database 11g RAC on RHEL6.
——————————————————
Finished.