Oracle RAC 10g: Adding a Node
As the title suggests, this article walks through the technical details of adding a node to an existing Oracle Database 10g RAC cluster.
In my environment:
Existing RAC nodes:
Node 1: ora10g1
Node 2: ora10g2
Node to be added:
Node 3: ora10g3
——————————————————————————————————
(Operate on: Node 1 and Node 2)
First, you need a running, healthy RAC environment:
[root@ora10g2 ~]# su - grid
[grid@ora10g2 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.dblogic.db application    0/1    0/1    ONLINE    ONLINE    ora10g1
ora....c1.inst application    0/5    0/0    ONLINE    ONLINE    ora10g1
ora....c2.inst application    0/5    0/0    ONLINE    ONLINE    ora10g2
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    ora10g1
ora....G1.lsnr application    0/5    0/0    ONLINE    ONLINE    ora10g1
ora....0g1.gsd application    0/5    0/0    ONLINE    ONLINE    ora10g1
ora....0g1.ons application    0/3    0/0    ONLINE    ONLINE    ora10g1
ora....0g1.vip application    0/0    0/0    ONLINE    ONLINE    ora10g1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    ora10g2
ora....G2.lsnr application    0/5    0/0    ONLINE    ONLINE    ora10g2
ora....0g2.gsd application    0/5    0/0    ONLINE    ONLINE    ora10g2
ora....0g2.ons application    0/3    0/0    ONLINE    ONLINE    ora10g2
ora....0g2.vip application    0/0    0/0    ONLINE    ONLINE    ora10g2
[grid@ora10g2 ~]$
(Operate on: Node 3)
Next, prepare the Linux environment of the new node so that it matches the configuration of the existing RAC nodes.
Create the groups:
groupadd -g 1000 oinstall
groupadd -g 2000 dba
groupadd -g 3000 asmadmin
groupadd -g 4000 asmdba
groupadd -g 5000 asmoper
groupadd -g 6000 oper
Create the users:
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 1200 -g oinstall -G dba,oper,asmdba oracle
Result:
[root@ora10g3 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),3000(asmadmin),4000(asmdba),5000(asmoper)
[root@ora10g3 ~]#
[root@ora10g3 ~]# id oracle
uid=1200(oracle) gid=1000(oinstall) groups=1000(oinstall),2000(dba),4000(asmdba),6000(oper)
[root@ora10g3 ~]#
User environment variables:
The Grid and Oracle installation paths set in the user environment variables must match the configuration on the existing cluster nodes:
Grid:
[root@ora10g3 ~]# cat /home/grid/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

# Oracle 10g RAC
export ORACLE_SID=+ASM3
export JAVA_HOME=/usr/local/java
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/10g/grid
export ORACLE_PATH=/u01/app/oracle/common/oracle/sql
export ORACLE_TERM=xterm
export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_NLS11=$ORACLE_HOME/nls/data
export PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin:/u01/app/common/oracle/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/oracm/lib:/lib:/usr/lib:/usr/local/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export THREADS_FLAG=native
export TEMP=/tmp
export TMPDIR=/tmp
[root@ora10g3 ~]#
Oracle:
[root@ora10g3 ~]# cat /home/oracle/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

# Oracle 10g RAC
export ORACLE_SID=dblogic3
export ORACLE_UNQNAME=dblogic
export JAVA_HOME=/usr/local/java
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10g/dbhome_1
export ORACLE_PATH=/u01/app/common/oracle/sql
export ORACLE_TERM=xterm
export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_NLS11=$ORACLE_HOME/nls/data
export PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin:/u01/app/common/oracle/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/oracm/lib:/lib:/usr/lib:/usr/local/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export THREADS_FLAG=native
export TEMP=/tmp
export TMPDIR=/tmp
[root@ora10g3 ~]#
YUM configuration on Node 3:
Mount the installation DVD:
[root@ora10g3 ~]# ll / | grep --color iso
dr-xr-xr-x   8 root root 14336 Sep 10  2013 iso
[root@ora10g3 ~]#
[root@ora10g3 ~]# cat /etc/fstab
LABEL=/                 /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
LABEL=SWAP-sda3         swap                    swap    defaults        0 0
/dev/sr0                /iso                    iso9660 defaults        0 0
[root@ora10g3 ~]#
[root@ora10g3 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              16G  2.7G   12G  19% /
/dev/sda1             289M   17M  258M   7% /boot
tmpfs                1004M     0 1004M   0% /dev/shm
/dev/sr0              4.1G  4.1G     0 100% /iso
[root@ora10g3 ~]#
Configure the YUM repository:
[root@ora10g3 ~]# cat /etc/yum.repos.d/iso.repo
[Server]
name=Server
baseurl=file:///iso/Server
gpgcheck=0

[VT]
name=VT
baseurl=file:///iso/VT
gpgcheck=0

[Cluster]
name=Cluster
baseurl=file:///iso/Cluster
gpgcheck=0

[ClusterStorage]
name=ClusterStorage
baseurl=file:///iso/ClusterStorage
gpgcheck=0
[root@ora10g3 ~]#
The hangcheck-timer module:
[root@ora10g3 ~]# cat /etc/modprobe.conf
alias eth0 e1000
alias eth1 e1000
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptspi
alias scsi_hostadapter2 ata_piix
alias scsi_hostadapter3 ahci

# HangCheck-time
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
[root@ora10g3 ~]#
[root@ora10g3 ~]# cat /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

# HangCheck-time
/sbin/modprobe hangcheck-timer
[root@ora10g3 ~]#
[root@ora10g3 ~]# sh /etc/rc.local
[root@ora10g3 ~]#
[root@ora10g3 ~]# cat /var/log/messages | grep hangcheck
Jan  3 05:26:14 ora10g3 kernel: Hangcheck: starting hangcheck timer 0.9.0 (tick is 30 seconds, margin is 180 seconds).
[root@ora10g3 ~]#
File: /etc/sysctl.conf
[root@ora10g3 ~]# cat /etc/sysctl.conf # Kernel sysctl configuration file for Red Hat Linux # # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and # sysctl.conf(5) for more details. # Controls IP packet forwarding net.ipv4.ip_forward = 0 # Controls source route verification net.ipv4.conf.default.rp_filter = 1 # Do not accept source routing net.ipv4.conf.default.accept_source_route = 0 # Controls the System Request debugging functionality of the kernel kernel.sysrq = 0 # Controls whether core dumps will append the PID to the core filename # Useful for debugging multi-threaded applications kernel.core_uses_pid = 1 # Controls the use of TCP syncookies net.ipv4.tcp_syncookies = 1 # Controls the default maxmimum size of a mesage queue kernel.msgmnb = 65536 # Controls the maximum size of a message, in bytes kernel.msgmax = 65536 # Controls the maximum shared segment size, in bytes kernel.shmmax = 68719476736 # Controls the maximum number of shared memory segments, in pages kernel.shmall = 4294967296 # Oracle 10g RAC kernel.shmall = 2097152 kernel.shmmax = 536870912 kernel.shmmni = 4096 kernel.sem = 250 32000 100 128 fs.file-max = 65536 net.ipv4.ip_local_port_range = 1024 65000 net.core.rmem_default = 4194304 net.core.rmem_max = 4194304 net.core.wmem_default = 262144 net.core.wmem_max = 262144 [root@ora10g3 ~]# [root@ora10g3 ~]# sysctl -p net.ipv4.ip_forward = 0 net.ipv4.conf.default.rp_filter = 1 net.ipv4.conf.default.accept_source_route = 0 kernel.sysrq = 0 kernel.core_uses_pid = 1 net.ipv4.tcp_syncookies = 1 kernel.msgmnb = 65536 kernel.msgmax = 65536 kernel.shmmax = 68719476736 kernel.shmall = 4294967296 kernel.shmall = 2097152 kernel.shmmax = 536870912 kernel.shmmni = 4096 kernel.sem = 250 32000 100 128 fs.file-max = 65536 net.ipv4.ip_local_port_range = 1024 65000 net.core.rmem_default = 4194304 net.core.rmem_max = 4194304 net.core.wmem_default = 262144 net.core.wmem_max = 262144 [root@ora10g3 ~]# |
File: /etc/security/limits.conf
[root@ora10g3 ~]# cat /etc/security/limits.conf # /etc/security/limits.conf # #Each line describes a limit for a user in the form: # #<domain> <type> <item> <value> # #Where: #<domain> can be: # - an user name # - a group name, with @group syntax # - the wildcard *, for default entry # - the wildcard %, can be also used with %group syntax, # for maxlogin limit # #<type> can have the two values: # - "soft" for enforcing the soft limits # - "hard" for enforcing hard limits # #<item> can be one of the following: # - core - limits the core file size (KB) # - data - max data size (KB) # - fsize - maximum filesize (KB) # - memlock - max locked-in-memory address space (KB) # - nofile - max number of open files # - rss - max resident set size (KB) # - stack - max stack size (KB) # - cpu - max CPU time (MIN) # - nproc - max number of processes # - as - address space limit # - maxlogins - max number of logins for this user # - maxsyslogins - max number of logins on the system # - priority - the priority to run user process with # - locks - max number of file locks the user can hold # - sigpending - max number of pending signals # - msgqueue - max memory used by POSIX message queues (bytes) # - nice - max nice priority allowed to raise to # - rtprio - max realtime priority # #<domain> <type> <item> <value> # #* soft core 0 #* hard rss 10000 #@student hard nproc 20 #@faculty soft nproc 20 #@faculty hard nproc 50 #ftp hard nproc 0 #@student - maxlogins 4 # End of file # Oracle 10g RAC * soft nproc 2047 * hard nproc 16384 * soft nofile 1024 * hard nofile 65536 [root@ora10g3 ~]# |
File: /etc/pam.d/login
[root@ora10g3 ~]# cat /etc/pam.d/login
#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    optional     pam_keyinit.so force revoke
session    required     pam_loginuid.so
session    include      system-auth
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open

# Oracle 10g RAC
session    required     /lib/security/pam_limits.so
session    required     pam_limits.so
[root@ora10g3 ~]#
1. Time synchronization:
My environment is simulated with VMware, so each node VM needs to have the time synchronization service enabled:
[root@ora10g3 ~]# vmware-toolbox-cmd timesync status
Disabled
[root@ora10g3 ~]# vmware-toolbox-cmd timesync enable
Enabled
[root@ora10g3 ~]#
2. Shared storage:
The shared storage must be visible to the new node exactly as it is on the existing cluster nodes:
[root@ora10g3 ~]# service iscsi status
iscsid (pid 2913) is running...
[root@ora10g3 ~]# service iscsid status
iscsid (pid 2913) is running...
[root@ora10g3 ~]#
[root@ora10g3 ~]# iscsiadm -m discovery -t sendtargets -p openfiler
192.168.195.128:3260,1 ora10grac
[root@ora10g3 ~]#
[root@ora10g3 ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          38      305203+  83  Linux
/dev/sda2              39        2099    16554982+  83  Linux
/dev/sda3            2100        2610     4104607+  82  Linux swap / Solaris

Disk /dev/sdb: 20.4 GB, 20468203520 bytes
64 heads, 32 sectors/track, 19520 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         489      500720   83  Linux
/dev/sdb2             490         978      500736   83  Linux
/dev/sdb3             979       19520    18987008   83  Linux
[root@ora10g3 ~]#
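If you want the iSCSI session to come back automatically after the new node reboots, something along these lines should work with open-iscsi (a sketch, not part of the original steps; the target name ora10grac and the portal openfiler are taken from the discovery output above):

# Make the iSCSI login persistent across reboots -- a sketch.
iscsiadm -m node -T ora10grac -p openfiler --login
iscsiadm -m node -T ora10grac -p openfiler --op update -n node.startup -v automatic
chkconfig iscsi on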
3. Make sure the raw devices are in order:
In my environment, the RAC OCR and voting disk are provided through the Linux rawdevices system service.
So, after the LUNs are accessible as shown above, you also need to make sure the rawdevices service is properly configured and running.
[root@ora10g3 ~]# cat /etc/sysconfig/rawdevices
# raw device bindings
# format:  <rawdev> <major> <minor>
#          <rawdev> <blockdev>
# example: /dev/raw/raw1 /dev/sda1
#          /dev/raw/raw2 8 5

# Oracle 10g RAC
# OCR
/dev/raw/raw1 /dev/sdb1
# Voting Disk
/dev/raw/raw2 /dev/sdb2
# Data01
/dev/raw/raw3 /dev/sdb3
[root@ora10g3 ~]#
[root@ora10g3 ~]# cat /etc/udev/rules.d/60-raw.rules
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.

# Oracle 10g RAC
KERNEL=="raw*",OWNER="grid",GROUP="oinstall",MODE="0660"
[root@ora10g3 ~]#
[root@ora10g3 ~]# service rawdevices status
/dev/raw/raw1:  bound to major 8, minor 17
/dev/raw/raw2:  bound to major 8, minor 18
/dev/raw/raw3:  bound to major 8, minor 19
[root@ora10g3 ~]#
[root@ora10g3 ~]# ll /dev/raw/
total 0
crw-rw---- 1 root root 162, 1 Jan  3 04:45 raw1
crw-rw---- 1 root root 162, 2 Jan  3 04:45 raw2
crw-rw---- 1 root root 162, 3 Jan  3 04:45 raw3
[root@ora10g3 ~]#
[root@ora10g3 ~]# chkconfig rawdevices --list
rawdevices      0:off   1:off   2:off   3:on    4:on    5:on    6:off
[root@ora10g3 ~]#
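Note that in the listing above the raw device nodes are still owned by root:root even though the udev rule asks for grid:oinstall. The clusterware processes that run as grid will need access to these devices, so if you see the same thing it is worth correcting the ownership before continuing; a precautionary sketch (the udev rule above should keep it correct across reboots):

# Bring the ownership in line with the udev rule -- a precautionary sketch, not an original step.
chown grid:oinstall /dev/raw/raw1 /dev/raw/raw2 /dev/raw/raw3
chmod 660 /dev/raw/raw1 /dev/raw/raw2 /dev/raw/raw3
ls -l /dev/raw/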
4. Add the new node's IP-to-hostname mappings to /etc/hosts on the existing cluster nodes, and add the existing cluster nodes to /etc/hosts on the new node:
Every node in the cluster needs to know every other node's IP address and hostname, so the new node must appear in /etc/hosts on all cluster nodes.
After the node is added, /etc/hosts on every node should therefore look like this:
[root@ora10g3 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
#127.0.0.1      localhost.localdomain localhost
#::1            localhost6.localdomain6 localhost6

# Local host
127.0.0.1       localhost

# Public
192.168.195.11  ora10g1
192.168.195.12  ora10g2
192.168.195.6   ora10g3

# Private
192.168.13.11   ora10g1-priv
192.168.13.12   ora10g2-priv
192.168.13.6    ora10g3-priv

# Virtual
192.168.195.21  ora10g1-vip
192.168.195.22  ora10g2-vip
192.168.195.26  ora10g3-vip

# Shared Storage
192.168.195.128 openfiler
[root@ora10g3 ~]#
5. Establish SSH user equivalence between the existing cluster nodes and the new node:
Existing cluster nodes:
[root@ora10g1 ~]# hostname ora10g1 [root@ora10g1 ~]# su - grid [grid@ora10g1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub ora10g3 15 The authenticity of host 'ora10g3 (192.168.195.6)' can't be established. RSA key fingerprint is 8d:81:f0:84:ad:f4:0f:01:88:a7:cd:ce:25:cc:f3:66. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ora10g3,192.168.195.6' (RSA) to the list of known hosts. grid@ora10g3's password: Now try logging into the machine, with "ssh 'ora10g3'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. [grid@ora10g1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub ora10g3-priv 15 The authenticity of host 'ora10g3-priv (192.168.13.6)' can't be established. RSA key fingerprint is 8d:81:f0:84:ad:f4:0f:01:88:a7:cd:ce:25:cc:f3:66. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ora10g3-priv,192.168.13.6' (RSA) to the list of known hosts. Now try logging into the machine, with "ssh 'ora10g3-priv'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. [grid@ora10g1 ~]$ exit logout [root@ora10g1 ~]# su - oracle [oracle@ora10g1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub ora10g3 15 The authenticity of host 'ora10g3 (192.168.195.6)' can't be established. RSA key fingerprint is 8d:81:f0:84:ad:f4:0f:01:88:a7:cd:ce:25:cc:f3:66. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ora10g3,192.168.195.6' (RSA) to the list of known hosts. oracle@ora10g3's password: Now try logging into the machine, with "ssh 'ora10g3'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. [oracle@ora10g1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub ora10g3-priv 15 The authenticity of host 'ora10g3-priv (192.168.13.6)' can't be established. RSA key fingerprint is 8d:81:f0:84:ad:f4:0f:01:88:a7:cd:ce:25:cc:f3:66. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ora10g3-priv,192.168.13.6' (RSA) to the list of known hosts. Now try logging into the machine, with "ssh 'ora10g3-priv'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. [oracle@ora10g1 ~]$ exit logout [root@ora10g1 ~]# [root@ora10g1 ~]# su - grid [grid@ora10g1 ~]$ ssh ora10g3 date Sat Jan 3 04:57:55 PST 2015 [grid@ora10g1 ~]$ ssh ora10g3-priv date Sat Jan 3 04:58:02 PST 2015 [grid@ora10g1 ~]$ [grid@ora10g1 ~]$ exit logout [root@ora10g1 ~]# su - oracle [oracle@ora10g1 ~]$ ssh ora10g3 date Sat Jan 3 04:58:13 PST 2015 [oracle@ora10g1 ~]$ ssh ora10g3-priv date Sat Jan 3 04:58:18 PST 2015 [oracle@ora10g1 ~]$ [oracle@ora10g1 ~]$ exit logout [root@ora10g1 ~]# |
(The Xshell log above is from Node 1; Node 2 is handled the same way.)
New node:
[root@ora10g3 ~]# hostname ora10g3 [root@ora10g3 ~]# su - grid [grid@ora10g3 ~]$ ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/home/grid/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/grid/.ssh/id_rsa. Your public key has been saved in /home/grid/.ssh/id_rsa.pub. The key fingerprint is: d4:15:f8:b9:b1:f5:6b:29:b3:ef:d0:35:8b:90:88:14 grid@ora10g3 [grid@ora10g3 ~]$ [grid@ora10g3 ~]$ ssh-copy-id -i .ssh/id_rsa.pub ora10g1 15 Now try logging into the machine, with "ssh 'ora10g1'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. [grid@ora10g3 ~]$ ssh-copy-id -i .ssh/id_rsa.pub ora10g2 15 The authenticity of host 'ora10g2 (192.168.195.12)' can't be established. RSA key fingerprint is ca:c6:da:d0:90:2d:28:dc:fb:9d:62:aa:4c:3f:de:ab. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ora10g2,192.168.195.12' (RSA) to the list of known hosts. grid@ora10g2's password: Now try logging into the machine, with "ssh 'ora10g2'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. [grid@ora10g3 ~]$ ssh-copy-id -i .ssh/id_rsa.pub ora10g3 15 The authenticity of host 'ora10g3 (192.168.195.6)' can't be established. RSA key fingerprint is 8d:81:f0:84:ad:f4:0f:01:88:a7:cd:ce:25:cc:f3:66. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ora10g3,192.168.195.6' (RSA) to the list of known hosts. grid@ora10g3's password: Now try logging into the machine, with "ssh 'ora10g3'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. [grid@ora10g3 ~]$ [grid@ora10g3 ~]$ ssh-copy-id -i .ssh/id_rsa.pub ora10g1-priv 15 The authenticity of host 'ora10g1-priv (192.168.13.11)' can't be established. RSA key fingerprint is 86:1a:22:0f:2f:7c:ad:f1:f8:41:d7:bf:d0:e9:8c:67. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ora10g1-priv,192.168.13.11' (RSA) to the list of known hosts. Now try logging into the machine, with "ssh 'ora10g1-priv'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. [grid@ora10g3 ~]$ ssh-copy-id -i .ssh/id_rsa.pub ora10g2-priv 15 The authenticity of host 'ora10g2-priv (192.168.13.12)' can't be established. RSA key fingerprint is ca:c6:da:d0:90:2d:28:dc:fb:9d:62:aa:4c:3f:de:ab. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ora10g2-priv,192.168.13.12' (RSA) to the list of known hosts. Now try logging into the machine, with "ssh 'ora10g2-priv'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. [grid@ora10g3 ~]$ ssh-copy-id -i .ssh/id_rsa.pub ora10g3-priv 15 The authenticity of host 'ora10g3-priv (192.168.13.6)' can't be established. RSA key fingerprint is 8d:81:f0:84:ad:f4:0f:01:88:a7:cd:ce:25:cc:f3:66. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ora10g3-priv,192.168.13.6' (RSA) to the list of known hosts. Now try logging into the machine, with "ssh 'ora10g3-priv'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. [grid@ora10g3 ~]$ [root@ora10g3 ~]# su - oracle [oracle@ora10g3 ~]$ ssh-keygen -t rsa Generating public/private rsa key pair. 
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/oracle/.ssh/id_rsa. Your public key has been saved in /home/oracle/.ssh/id_rsa.pub. The key fingerprint is: 5f:07:ce:bd:94:e3:28:d9:70:8a:c5:0f:71:ff:95:4f oracle@ora10g3 [oracle@ora10g3 ~]$ [oracle@ora10g3 ~]$ ssh-copy-id -i .ssh/id_rsa.pub ora10g1 15 The authenticity of host 'ora10g1 (192.168.195.11)' can't be established. RSA key fingerprint is 86:1a:22:0f:2f:7c:ad:f1:f8:41:d7:bf:d0:e9:8c:67. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ora10g1,192.168.195.11' (RSA) to the list of known hosts. oracle@ora10g1's password: Now try logging into the machine, with "ssh 'ora10g1'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. [oracle@ora10g3 ~]$ [oracle@ora10g3 ~]$ ssh-copy-id -i .ssh/id_rsa.pub ora10g2 15 The authenticity of host 'ora10g2 (192.168.195.12)' can't be established. RSA key fingerprint is ca:c6:da:d0:90:2d:28:dc:fb:9d:62:aa:4c:3f:de:ab. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ora10g2,192.168.195.12' (RSA) to the list of known hosts. oracle@ora10g2's password: Now try logging into the machine, with "ssh 'ora10g2'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. [oracle@ora10g3 ~]$ [oracle@ora10g3 ~]$ ssh-copy-id -i .ssh/id_rsa.pub ora10g3 15 The authenticity of host 'ora10g3 (192.168.195.6)' can't be established. RSA key fingerprint is 8d:81:f0:84:ad:f4:0f:01:88:a7:cd:ce:25:cc:f3:66. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ora10g3,192.168.195.6' (RSA) to the list of known hosts. oracle@ora10g3's password: Now try logging into the machine, with "ssh 'ora10g3'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. [oracle@ora10g3 ~]$ [oracle@ora10g3 ~]$ ssh-copy-id -i .ssh/id_rsa.pub ora10g1-priv 15 The authenticity of host 'ora10g1-priv (192.168.13.11)' can't be established. RSA key fingerprint is 86:1a:22:0f:2f:7c:ad:f1:f8:41:d7:bf:d0:e9:8c:67. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ora10g1-priv,192.168.13.11' (RSA) to the list of known hosts. Now try logging into the machine, with "ssh 'ora10g1-priv'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. [oracle@ora10g3 ~]$ [oracle@ora10g3 ~]$ ssh-copy-id -i .ssh/id_rsa.pub ora10g2-priv 15 The authenticity of host 'ora10g2-priv (192.168.13.12)' can't be established. RSA key fingerprint is ca:c6:da:d0:90:2d:28:dc:fb:9d:62:aa:4c:3f:de:ab. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ora10g2-priv,192.168.13.12' (RSA) to the list of known hosts. Now try logging into the machine, with "ssh 'ora10g2-priv'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. [oracle@ora10g3 ~]$ [oracle@ora10g3 ~]$ ssh-copy-id -i .ssh/id_rsa.pub ora10g3-priv 15 The authenticity of host 'ora10g3-priv (192.168.13.6)' can't be established. RSA key fingerprint is 8d:81:f0:84:ad:f4:0f:01:88:a7:cd:ce:25:cc:f3:66. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ora10g3-priv,192.168.13.6' (RSA) to the list of known hosts. 
Now try logging into the machine, with "ssh 'ora10g3-priv'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. [oracle@ora10g3 ~]$ [oracle@ora10g3 ~]$ [oracle@ora10g3 ~]$ exit logout [root@ora10g3 ~]# [root@ora10g3 ~]# su - grid [grid@ora10g3 ~]$ ssh ora10g1 date Sat Jan 3 05:05:32 PST 2015 [grid@ora10g3 ~]$ ssh ora10g2 date Sat Jan 3 05:05:36 PST 2015 [grid@ora10g3 ~]$ ssh ora10g3 date Sat Jan 3 05:05:39 PST 2015 [grid@ora10g3 ~]$ [grid@ora10g3 ~]$ ssh ora10g1-priv date Sat Jan 3 05:05:47 PST 2015 [grid@ora10g3 ~]$ ssh ora10g2-priv date Sat Jan 3 05:05:50 PST 2015 [grid@ora10g3 ~]$ ssh ora10g3-priv date Sat Jan 3 05:05:53 PST 2015 [grid@ora10g3 ~]$ [grid@ora10g3 ~]$ exit logout [root@ora10g3 ~]# [root@ora10g3 ~]# su - oracle [oracle@ora10g3 ~]$ ssh ora10g1 date Sat Jan 3 05:06:25 PST 2015 [oracle@ora10g3 ~]$ ssh ora10g2 date Sat Jan 3 05:06:30 PST 2015 [oracle@ora10g3 ~]$ ssh ora10g3 date Sat Jan 3 05:06:33 PST 2015 [oracle@ora10g3 ~]$ [oracle@ora10g3 ~]$ ssh ora10g1-priv date Sat Jan 3 05:06:40 PST 2015 [oracle@ora10g3 ~]$ ssh ora10g2-priv date Sat Jan 3 05:06:43 PST 2015 [oracle@ora10g3 ~]$ ssh ora10g3-priv date Sat Jan 3 05:06:47 PST 2015 [oracle@ora10g3 ~]$ |
———————————————————————————————————————
At this point, the preparation of Node 3 for joining the existing RAC cluster is complete.
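Optionally, before actually adding the node, the new node can be checked with the Cluster Verification Utility that ships with 10gR2 Clusterware. A sketch, assuming cluvfy is available in the existing CRS home:

# Run as grid from an existing node (e.g. ora10g1) -- an optional verification sketch.
cd /u01/app/10g/grid/bin
./cluvfy stage -post hwos -n ora10g3 -verbose
./cluvfy stage -pre crsinst -n ora10g3
./cluvfy comp peer -refnode ora10g1 -n ora10g3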
Adding a node to a RAC 10g cluster consists of two parts:
1. Adding the node to the Clusterware
2. Adding the node to the Database
Let's start with the Clusterware.
—————————————————————————————————————————
Adding the Clusterware node:
(Operate on: an existing cluster node, Node 1 or Node 2)
Here I use Node 1 to perform the add-node operation.
Xmanager - Passive is used during this process for the graphical installer.
Node 1:
Set the DISPLAY environment variable:
[root@ora10g1 ~]# ping 192.168.195.1 -c 3
PING 192.168.195.1 (192.168.195.1) 56(84) bytes of data.
64 bytes from 192.168.195.1: icmp_seq=1 ttl=64 time=0.180 ms
64 bytes from 192.168.195.1: icmp_seq=2 ttl=64 time=0.188 ms
64 bytes from 192.168.195.1: icmp_seq=3 ttl=64 time=0.198 ms

--- 192.168.195.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.180/0.188/0.198/0.017 ms
[root@ora10g1 ~]#
[root@ora10g1 ~]# export DISPLAY=192.168.195.1:0.0
[root@ora10g1 ~]# xhost +
access control disabled, clients can connect from any host
[root@ora10g1 ~]#
[root@ora10g1 ~]# su - grid
[grid@ora10g1 ~]$ xhost +
access control disabled, clients can connect from any host
[grid@ora10g1 ~]$
Log in as the user that installed CRS:
In my environment, the Clusterware was installed as the grid user.
[grid@ora10g1 ~]$ pwd
/home/grid
[grid@ora10g1 ~]$ cd $ORACLE_HOME
[grid@ora10g1 grid]$ pwd
/u01/app/10g/grid
[grid@ora10g1 grid]$ cd oui/bin
[grid@ora10g1 bin]$ ll
total 236
-rwxrwxr-x 1 grid oinstall    167 Dec 19 22:30 addLangs.sh
-rwxrwxr-x 1 grid oinstall     80 Dec 19 22:30 addNode.sh
-rwxrwxr-x 1 grid oinstall  46448 Oct 16  2005 lsnodes
-rwxr-xr-x 1 grid oinstall    399 Dec 19 22:30 ouica.sh
drwxrwx--- 2 grid oinstall   4096 Dec 19 22:29 resource
-rwxrwxr-x 1 grid oinstall    837 Dec 19 22:30 runConfig.sh
-rwxrwxr-x 1 grid oinstall 163185 Jun 22  2005 runInstaller
-rwxrwxr-x 1 grid oinstall     57 Dec 19 22:30 runInstaller.sh
[grid@ora10g1 bin]$
[grid@ora10g1 bin]$ ./addNode.sh
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
Oracle Universal Installer, Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved.
(the command line holds here ...)
The addNode.sh command above brings up a graphical interface; the details are shown below as screenshots.
Before clicking "Next" to start the actual installation, take a look at the current state of the Grid target directories on Node 3:
[root@ora10g3 ~]# tree /u01/
/u01/
`-- app
    |-- 10g
    |-- grid
    `-- oracle

4 directories, 0 files
[root@ora10g3 ~]#
[root@ora10g3 ~]# du -sh /u01/
20K     /u01/
[root@ora10g3 ~]#
[root@ora10g3 ~]# su - grid
[grid@ora10g3 ~]$ env | grep ORACLE_BASE
ORACLE_BASE=/u01/app/grid
[grid@ora10g3 ~]$ env | grep ORACLE_HOME
ORACLE_HOME=/u01/app/10g/grid
[grid@ora10g3 ~]$
When the installation launched from Node 1 completes, the state of these two directories configured for the grid user will have changed:
/u01/app/grid
/u01/app/10g/grid
Back on Node 1, click "Install" in the screenshot above to start the installation in earnest.
Wait for the progress bar to complete.
The actual run time of this step depends on the performance of the hosts.
Then you will see the following screen prompting you to run the configuration scripts:
As shown in the screenshot above:
First, run the orainstRoot.sh script on Node 3:
[grid@ora10g3 ~]$ su - root
Password:
[root@ora10g3 ~]# hostname
ora10g3
[root@ora10g3 ~]# whoami
root
[root@ora10g3 ~]#
[root@ora10g3 ~]# /u01/app/grid/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/grid/oraInventory to 770.
Changing groupname of /u01/app/grid/oraInventory to oinstall.
The execution of the script is complete
[root@ora10g3 ~]#
Then, run rootaddnode.sh on Node 1:
[root@ora10g1 ~]# /u01/app/10g/grid/install/rootaddnode.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Node ora10g3 is already assigned nodenum 3.
Aborting: No configuration data has been changed.
 clscfg -add -nn nameA,numA,nameB,numB,... -pn privA,numA,privB,numB,...
        [-hn hostA,numA,hostB,numB,...] [-t p1,p2,p3,p4]
  -nn specifies nodenames in the same fashion as -nn in -install mode
  -pn specifies private interconnect names as -pn in -install mode
  -hn specifies hostnames in the same fashion as -hn in -install mode
  -t specifies port numbers to be used by CRS daemons on the new node(s)
     default ports: 49895,49896,49897,49898
WARNING: Using this tool may corrupt your cluster configuration. Do not use unless you positively know what you are doing.
/u01/app/10g/grid/bin/srvctl add nodeapps -n ora10g3 -A ora10g3-vip/255.255.255.0/eth0 -o /u01/app/10g/grid
[root@ora10g1 ~]#
The output above is what a successful run looks like.
Possible error:
When running rootaddnode.sh on Node 1, you may hit the following error:
[root@ora10g1 ~]# /u01/app/10g/grid/install/rootaddnode.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 3: ora10g3 ora10g3-priv ora10g3
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/u01/app/10g/grid/jdk/jre/bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
awk: (FILENAME=- FNR=1) fatal: attempt to access field -1
/u01/app/10g/grid/bin/srvctl add nodeapps -n ora10g3 -A ora10g3-vip// -o /u01/app/10g/grid
/u01/app/10g/grid/jdk/jre/bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
[root@ora10g1 ~]#
When you hit this error, you need to modify the srvctl and vipca scripts in the bin directory of your Grid software home:
[root@ora10g1 ~]# cat /u01/app/10g/grid/bin/srvctl | grep --color LD_ASSUME_KERNEL
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL
[root@ora10g1 ~]# cat /u01/app/10g/grid/bin/vipca | grep --color LD_ASSUME_KERNEL
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL
[root@ora10g1 ~]#
As the listing above shows, locate the LD_ASSUME_KERNEL declaration and add an "unset LD_ASSUME_KERNEL" statement right after it.
Then re-run rootaddnode.sh.
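If you prefer not to edit the two scripts by hand, the same change can be scripted; a sketch (make backups first; this edits srvctl and vipca under the CRS home):

# Append "unset LD_ASSUME_KERNEL" directly after the export line in both scripts -- a sketch.
cd /u01/app/10g/grid/bin
cp -p srvctl srvctl.bak
cp -p vipca vipca.bak
sed -i '/^export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' srvctl vipca
grep -A 1 'export LD_ASSUME_KERNEL' srvctl vipca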
Finally, go back to Node 3 and run root.sh:
[root@ora10g3 ~]# /u01/app/10g/grid/root.sh
WARNING: directory '/u01/app/10g' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
OCR LOCATIONS =  /dev/raw/raw1
OCR backup directory '/u01/app/10g/grid/cdata/crs-me' does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/10g' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: ora10g1 ora10g1-priv ora10g1
node 2: ora10g2 ora10g2-priv ora10g2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        ora10g1
        ora10g2
        ora10g3
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/app/10g/grid/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
[root@ora10g3 ~]#
As the output above shows, running the script on Node 3 may hit the same libpthread.so.0 error.
The fix this time is slightly different from the one above, as follows.
First, LD_ASSUME_KERNEL still has to be unset:
[root@ora10g3 ~]# cat /u01/app/10g/grid/bin/srvctl | grep --color LD_ASSUME_KERNEL
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL
[root@ora10g3 ~]#
[root@ora10g3 ~]# cat /u01/app/10g/grid/bin/vipca | grep --color LD_ASSUME_KERNEL
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL
[root@ora10g3 ~]#
Check the cluster network configuration:
[root@ora10g3 ~]# cd /u01/app/10g/grid/bin/
[root@ora10g3 bin]#
[root@ora10g3 bin]# ll | grep --color oifcfg
-rwxr-xr-x 1 grid oinstall   1674 Dec 19 22:30 oifcfg
-rwxr-xr-x 1 grid oinstall 998694 Oct 20  2005 oifcfg.bin
[root@ora10g3 bin]#
[root@ora10g3 bin]# ./oifcfg getif
eth0  192.168.195.0  global  public
eth0  192.168.13.0  global  cluster_interconnect
eth1  192.168.13.0  global  cluster_interconnect
[root@ora10g3 bin]#
Here you can see that the cluster network configuration is wrong: eth0 is registered on both the public network and the cluster interconnect.
Delete the entries:
[root@ora10g3 bin]# ./oifcfg delif -global eth0
[root@ora10g3 bin]#
[root@ora10g3 bin]# ./oifcfg delif -global eth1
[root@ora10g3 bin]#
[root@ora10g3 bin]# ./oifcfg getif
[root@ora10g3 bin]#
Recreate them:
[root@ora10g3 bin]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:8C:6E:00
          inet addr:192.168.195.6  Bcast:192.168.195.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe8c:6e00/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:790134 errors:0 dropped:0 overruns:0 frame:0
          TX packets:201741 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1123190830 (1.0 GiB)  TX bytes:15968060 (15.2 MiB)

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:8C:6E:00
          inet addr:192.168.195.26  Bcast:192.168.195.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth1      Link encap:Ethernet  HWaddr 00:0C:29:8C:6E:0A
          inet addr:192.168.13.6  Bcast:192.168.13.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe8c:6e0a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12641 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8672 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8457631 (8.0 MiB)  TX bytes:4758286 (4.5 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:12963 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12963 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:14017385 (13.3 MiB)  TX bytes:14017385 (13.3 MiB)

[root@ora10g3 bin]#
[root@ora10g3 bin]# ./oifcfg getif
[root@ora10g3 bin]#
[root@ora10g3 bin]# ./oifcfg setif -global eth0/192.168.195.0:public
[root@ora10g3 bin]# ./oifcfg setif -global eth1/192.168.13.0:cluster_interconnect
[root@ora10g3 bin]#
[root@ora10g3 bin]# ./oifcfg getif
eth0  192.168.195.0  global  public
eth1  192.168.13.0  global  cluster_interconnect
[root@ora10g3 bin]#
[root@ora10g3 bin]#
Then re-run vipca:
[root@ora10g3 bin]# export DISPLAY=192.168.195.1:0.0
[root@ora10g3 bin]# xhost +
access control disabled, clients can connect from any host
[root@ora10g3 bin]# ./vipca &
[1] 8932
[root@ora10g3 bin]#
As shown in the screenshot above, this resolves the error reported by the root.sh script.
There is no need to run root.sh again.
(My Oracle Support note 414163.1, "10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA / SRVCTL / OUI Failures)", describes this root.sh bug in detail.)
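Before returning to the OUI session on Node 1, it can help to confirm that the node applications for the new node are online; a quick check (a sketch, run as root or grid on any node):

# Verify the node applications (VIP, GSD, ONS) for ora10g3 -- a verification sketch.
/u01/app/10g/grid/bin/srvctl status nodeapps -n ora10g3
/u01/app/10g/grid/bin/crs_stat -t | grep 0g3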
Then, go back to the addNode.sh window on Node 1 and click "OK" on the script-execution dialog to continue.
With that, the Clusterware part of adding a node to the RAC is complete.
You can check the current CRS resource status:
[root@ora10g3 bin]# hostname
ora10g3
[root@ora10g3 bin]# pwd
/u01/app/10g/grid/bin
[root@ora10g3 bin]# ./crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.dblogic.db application    0/1    0/1    ONLINE    ONLINE    ora10g1
ora....c1.inst application    0/5    0/0    ONLINE    ONLINE    ora10g1
ora....c2.inst application    0/5    0/0    ONLINE    ONLINE    ora10g2
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    ora10g1
ora....G1.lsnr application    0/5    0/0    ONLINE    ONLINE    ora10g1
ora....0g1.gsd application    0/5    0/0    ONLINE    ONLINE    ora10g1
ora....0g1.ons application    0/3    0/0    ONLINE    ONLINE    ora10g1
ora....0g1.vip application    0/0    0/0    ONLINE    ONLINE    ora10g1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    ora10g2
ora....G2.lsnr application    0/5    0/0    ONLINE    ONLINE    ora10g2
ora....0g2.gsd application    0/5    0/0    ONLINE    ONLINE    ora10g2
ora....0g2.ons application    0/3    0/0    ONLINE    ONLINE    ora10g2
ora....0g2.vip application    0/0    0/0    ONLINE    ONLINE    ora10g2
ora....0g3.gsd application    0/5    0/0    ONLINE    ONLINE    ora10g3
ora....0g3.ons application    0/3    0/0    ONLINE    ONLINE    ora10g3
ora....0g3.vip application    0/0    0/0    ONLINE    ONLINE    ora10g3
[root@ora10g3 bin]#
[root@ora10g3 bin]# ./crs_stat -t -v | grep --color ora10g3
ora....0g3.gsd application    0/5    0/0    ONLINE    ONLINE    ora10g3
ora....0g3.ons application    0/3    0/0    ONLINE    ONLINE    ora10g3
ora....0g3.vip application    0/0    0/0    ONLINE    ONLINE    ora10g3
[root@ora10g3 bin]#
As you can see, Node 3's resources have been added to the cluster.
Now look at how the Grid installation directory has changed (compare with its state before addNode.sh was started, shown above):
[root@ora10g3 bin]# du -sh /u01/
816M    /u01/
[root@ora10g3 bin]#
————————————————————————————————————————————
Next, add the node for the Database.
This part should be performed as the user that installed the Database software.
As with the CRS node addition, it can be run from any one of the existing nodes.
Here, I again use Node 1.
My Database installation user is oracle.
[root@ora10g1 ~]# hostname
ora10g1
[root@ora10g1 ~]# su - oracle
[oracle@ora10g1 ~]$ whoami
oracle
[oracle@ora10g1 ~]$
[oracle@ora10g1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@ora10g1 bin]$ pwd
/u01/app/oracle/product/10g/dbhome_1/oui/bin
[oracle@ora10g1 bin]$ ll
total 236
-rwxr-xr-x 1 oracle oinstall    186 Dec 19 23:16 addLangs.sh
-rwxr-xr-x 1 oracle oinstall     99 Dec 19 23:16 addNode.sh
-rwxr-xr-x 1 oracle oinstall  46448 Oct 16  2005 lsnodes
-rwxr-xr-x 1 oracle oinstall    532 Dec 19 23:16 ouica.sh
drwxr-x--- 2 oracle oinstall   4096 Dec 19 23:15 resource
-rwxr-xr-x 1 oracle oinstall   1008 Dec 19 23:16 runConfig.sh
-rwxr-xr-x 1 oracle oinstall 163185 Jun 22  2005 runInstaller
-rwxr-xr-x 1 oracle oinstall     76 Dec 19 23:16 runInstaller.sh
[oracle@ora10g1 bin]$
As before, simply run addNode.sh:
[oracle@ora10g1 bin]$ ./addNode.sh &
[1] 5924
[oracle@ora10g1 bin]$ Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
Oracle Universal Installer, Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved.

[oracle@ora10g1 bin]$
Sometimes you may hit the following error at this step:
[oracle@ora10g1 bin]$ ./addNode.sh &
[1] 5262
[oracle@ora10g1 bin]$ You do not have sufficient permissions to access the inventory '/u01/app/grid/oraInventory/orainstRoot.sh'. Installation cannot continue. Make sure that you have read/write permissions to the inventory directory and restart the installer.: Permission denied

[1]+  Exit 255                ./addNode.sh
[oracle@ora10g1 bin]$
This is caused by insufficient permissions on the /u01/app/grid/oraInventory/ directory.
The following change fixes the error:
1 2 |
[root@ora10g1 bin]# chmod -R 777 /u01/app/grid/oraInventory/
[root@ora10g1 bin]#
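chmod -R 777 gets past the error, but it is broader than necessary. Since oracle is a member of the oinstall group (as shown in the user setup earlier), granting group read/write on the inventory is a tighter alternative; a sketch, not part of the original steps:

# A less permissive alternative to 777 -- a sketch.
chown -R grid:oinstall /u01/app/grid/oraInventory/
chmod -R g+rwX /u01/app/grid/oraInventory/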
Here, I ran into an unexpected problem:
According to the warning message, I can re-run the command on the failed node (in this case, Node 2) after addNode.sh has finished.
Check the log file: /u01/app/grid/oraInventory/logs/addNodeActions2015-01-03_06-49-37AM.log
Oracle Partitioning 10.2.0.1.0 Enterprise Edition Options 10.2.0.1.0 INFO: Running command '/u01/app/oracle/product/10g/dbhome_1/oui/bin/../bin/runInstaller -paramFile /u01/app/oracle/product/10g/dbhome_1/oui/bin/../clusterparam.ini -silent -ignoreSysPrereqs -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/10g/dbhome_1 CLUSTER_NODES=ora10g1,ora10g2,ora10g3 CRS=false "INVENTORY_LOCATION=/u01/app/grid/oraInventory" LOCAL_NODE=ora10g2 -remoteInvocation -invokingNodeName ora10g1 -logFilePath "/u01/app/grid/oraInventory/logs" -timestamp 2015-01-03_06-49-37AM' on the nodes 'ora10g2'. INFO: Invoking OUI on cluster nodes ora10g2 INFO: /u01/app/oracle/product/10g/dbhome_1/oui/bin/../bin/runInstaller -paramFile /u01/app/oracle/product/10g/dbhome_1/oui/bin/../clusterparam.ini -silent -ignoreSysPrereqs -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/10g/dbhome_1 CLUSTER_NODES=ora10g1,ora10g2,ora10g3 CRS=false "INVENTORY_LOCATION=/u01/app/grid/oraInventory" LOCAL_NODE=ora10g2 -remoteInvocation -invokingNodeName ora10g1 -logFilePath "/u01/app/grid/oraInventory/logs" -timestamp 2015-01-03_06-49-37AM SEVERE: oracle.sysman.oii.oiip.oiipg.OiipgRemoteOpsException: Error occured while trying to run Unix command /u01/app/oracle/product/10g/dbhome_1/oui/bin/../bin/runInstaller -paramFile /u01/app/oracle/product/10g/dbhome_1/oui/bin/../clusterparam.ini -silent -ignoreSysPrereqs -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/10g/dbhome_1 CLUSTER_NODES=ora10g1,ora10g2,ora10g3 CRS=false "INVENTORY_LOCATION=/u01/app/grid/oraInventory" LOCAL_NODE=ora10g2 -remoteInvocation -invokingNodeName ora10g1 -logFilePath "/u01/app/grid/oraInventory/logs" -timestamp 2015-01-03_06-49-37AM on nodes ora10g2. [You do not have sufficient permissions to access the inventory '/u01/app/grid/oraInventory/logs'. Installation cannot continue. Make sure that you have read/write permissions to the inventory directory and restart the installer.: Permission denied :failed] at oracle.sysman.oii.oiip.oiipg.OiipgClusterRunCmd.runCmdOnUnix(OiipgClusterRunCmd.java:262) at oracle.sysman.oii.oiip.oiipg.OiipgClusterRunCmd.runAnyCmdOnNodes(OiipgClusterRunCmd.java:355) at oracle.sysman.oii.oiip.oiipg.OiipgClusterRunCmd.runCmd(OiipgClusterRunCmd.java:300) at oracle.sysman.oii.oiic.OiicBaseInventoryApp.runRemoteInvOpCmd(OiicBaseInventoryApp.java:241) at oracle.sysman.oii.oiic.OiicUpdateNodeList.clsCmdUpdateNodeList(OiicUpdateNodeList.java:203) at oracle.sysman.oii.oiif.oiifw.OiifwClusterSaveInventoryWCCE.doOperation(OiifwClusterSaveInventoryWCCE.java:358) at oracle.sysman.oii.oiif.oiifb.OiifbCondIterator.iterate(OiifbCondIterator.java:171) at oracle.sysman.oii.oiif.oiifw.OiifwAddNodePhaseWCDE.doOperation(OiifwAddNodePhaseWCDE.java:296) at oracle.sysman.oii.oiif.oiifb.OiifbCondIterator.iterate(OiifbCondIterator.java:171) at oracle.sysman.oii.oiic.OiicPullSession.doOperation(OiicPullSession.java:1273) at oracle.sysman.oii.oiic.OiicSessionWrapper.doOperation(OiicSessionWrapper.java:289) at oracle.sysman.oii.oiic.OiicInstaller.run(OiicInstaller.java:546) at oracle.sysman.oii.oiic.OiicInstaller.runInstaller(OiicInstaller.java:929) at oracle.sysman.oii.oiic.OiicInstaller.main(OiicInstaller.java:866) Caused by: oracle.ops.mgmt.cluster.ClusterException: You do not have sufficient permissions to access the inventory '/u01/app/grid/oraInventory/logs'. Installation cannot continue. 
Make sure that you have read/write permissions to the inventory directory and restart the installer.: Permission denied :failed at oracle.ops.mgmt.cluster.ClusterCmd.runCmd(ClusterCmd.java:1960) at oracle.sysman.oii.oiip.oiipg.OiipgClusterRunCmd.runCmdOnUnix(OiipgClusterRunCmd.java:256) ... 13 more INFO: Running command '/u01/app/oracle/product/10g/dbhome_1/oui/bin/../bin/runInstaller -paramFile /u01/app/oracle/product/10g/dbhome_1/oui/bin/../clusterparam.ini -silent -ignoreSysPrereqs -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/10g/dbhome_1 CLUSTER_NODES=ora10g1,ora10g2,ora10g3 CRS=false "INVENTORY_LOCATION=/u01/app/grid/oraInventory" LOCAL_NODE=ora10g3 -remoteInvocation -invokingNodeName ora10g1 -logFilePath "/u01/app/grid/oraInventory/logs" -timestamp 2015-01-03_06-49-37AM' on the nodes 'ora10g3'. INFO: Invoking OUI on cluster nodes ora10g3 INFO: /u01/app/oracle/product/10g/dbhome_1/oui/bin/../bin/runInstaller -paramFile /u01/app/oracle/product/10g/dbhome_1/oui/bin/../clusterparam.ini -silent -ignoreSysPrereqs -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/10g/dbhome_1 CLUSTER_NODES=ora10g1,ora10g2,ora10g3 CRS=false "INVENTORY_LOCATION=/u01/app/grid/oraInventory" LOCAL_NODE=ora10g3 -remoteInvocation -invokingNodeName ora10g1 -logFilePath "/u01/app/grid/oraInventory/logs" -timestamp 2015-01-03_06-49-37AM SEVERE: Remote 'UpdateNodeList' failed on nodes: 'ora10g2'. Refer to '/u01/app/grid/oraInventory/logs/addNodeActions2015-01-03_06-49-37AM.log' for details. You can manually re-run the following command on the failed nodes after the installation: /u01/app/oracle/product/10g/dbhome_1/oui/bin/runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/10g/dbhome_1 CLUSTER_NODES=ora10g1,ora10g2,ora10g3 CRS=false "INVENTORY_LOCATION=/u01/app/grid/oraInventory" LOCAL_NODE=<node on which command is to be run>. |
Looking through the log file, you can find the cause of the problem: "Caused by: oracle.ops.mgmt.cluster.ClusterException: You do not have sufficient permissions to access the inventory '/u01/app/grid/oraInventory/logs'. Installation cannot continue. Make sure that you have read/write permissions to the inventory directory and restart the installer.: Permission denied :failed"
So, skip this error for now; afterwards, grant the correct access permissions on the /u01/app/grid/oraInventory directory and re-run the following command on the failed node:
/u01/app/oracle/product/10g/dbhome_1/oui/bin/runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/10g/dbhome_1 CLUSTER_NODES=ora10g1,ora10g2,ora10g3 CRS=false "INVENTORY_LOCATION=/u01/app/grid/oraInventory" LOCAL_NODE=<node on which command is to be run>
That is all that is needed.
Go back to the addNode.sh OUI window, click "OK" on the error dialog, and let the installation continue.
Now it is time to run the script:
[root@ora10g3 bin]# hostname
ora10g3
[root@ora10g3 bin]#
[root@ora10g3 bin]# /u01/app/oracle/product/10g/dbhome_1/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/10g/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
[root@ora10g3 bin]#
Fix the failure that occurred on Node 2 during the installation above:
[root@ora10g2 ~]# hostname
ora10g2
[root@ora10g2 ~]# ll -d /u01/app/grid/oraInventory/
drwxrwx--- 5 grid oinstall 4096 Jan  3 05:52 /u01/app/grid/oraInventory/
[root@ora10g2 ~]#
[root@ora10g2 ~]# chmod -R 777 /u01/app/grid/oraInventory/
[root@ora10g2 ~]#
[root@ora10g2 ~]# su - oracle
[oracle@ora10g2 ~]$ export DISPLAY=192.168.195.1:0.0
[oracle@ora10g2 ~]$ xhost +
access control disabled, clients can connect from any host
[oracle@ora10g2 ~]$
[oracle@ora10g2 ~]$ /u01/app/oracle/product/10g/dbhome_1/oui/bin/runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/10g/dbhome_1 CLUSTER_NODES=ora10g1,ora10g2,ora10g3 CRS=false "INVENTORY_LOCATION=/u01/app/grid/oraInventory" LOCAL_NODE=ora10g2
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
'UpdateNodeList' was successful.
[oracle@ora10g2 ~]$
With that, the problem on Node 2 is fixed.
————————————————————————————————————————————————————
This completes extending the Database software to Node 3.
What remains is adding the node for the database that is already running.
1. The listener
Before configuring the listener for the new node, here is the listener state on an existing node:
[oracle@ora10g1 bin]$ hostname ora10g1 [oracle@ora10g1 bin]$ whoami oracle [oracle@ora10g1 bin]$ [oracle@ora10g1 bin]$ cat $ORACLE_HOME/network/admin/listener.ora # listener.ora.ora10g1 Network Configuration File: /u01/app/oracle/product/10g/dbhome_1/network/admin/listener.ora.ora10g1 # Generated by Oracle configuration tools. LISTENER_ORA10G1 = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora10g1-vip)(PORT = 1521)(IP = FIRST)) ) (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.195.11)(PORT = 1521)(IP = FIRST)) ) (ADDRESS_LIST = (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC)) ) ) ) [oracle@ora10g1 bin]$ [oracle@ora10g1 bin]$ cat $ORACLE_HOME/network/admin/tnsnames.ora # tnsnames.ora Network Configuration File: /u01/app/oracle/product/10g/dbhome_1/network/admin/tnsnames.ora # Generated by Oracle configuration tools. LISTENERS_DBLOGIC = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora10g1-vip)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = ora10g2-vip)(PORT = 1521)) ) DBLOGIC2 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ora10g2-vip)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = dblogic) (INSTANCE_NAME = dblogic2) ) ) DBLOGIC1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ora10g1-vip)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = dblogic) (INSTANCE_NAME = dblogic1) ) ) DBLOGIC = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ora10g1-vip)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = ora10g2-vip)(PORT = 1521)) (LOAD_BALANCE = yes) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = dblogic) ) ) [oracle@ora10g1 bin]$ [oracle@ora10g1 bin]$ lsnrctl status LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 03-JAN-2015 07:21:15 Copyright (c) 1991, 2005, Oracle. All rights reserved. Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521)) STATUS of the LISTENER ------------------------ Alias LISTENER_ORA10G1 Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production Start Date 26-DEC-2014 17:29:28 Uptime 7 days 13 hr. 51 min. 46 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File /u01/app/oracle/product/10g/dbhome_1/network/admin/listener.ora Listener Log File /u01/app/oracle/product/10g/dbhome_1/network/log/listener_ora10g1.log Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.195.21)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.195.11)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC))) Services Summary... Service "+ASM" has 1 instance(s). Instance "+ASM1", status BLOCKED, has 1 handler(s) for this service... Service "+ASM_XPT" has 1 instance(s). Instance "+ASM1", status BLOCKED, has 1 handler(s) for this service... Service "dblogic" has 2 instance(s). Instance "dblogic1", status READY, has 2 handler(s) for this service... Instance "dblogic2", status READY, has 1 handler(s) for this service... Service "dblogicXDB" has 2 instance(s). Instance "dblogic1", status READY, has 1 handler(s) for this service... Instance "dblogic2", status READY, has 1 handler(s) for this service... Service "dblogic_XPT" has 2 instance(s). Instance "dblogic1", status READY, has 2 handler(s) for this service... Instance "dblogic2", status READY, has 1 handler(s) for this service... The command completed successfully [oracle@ora10g1 bin]$ |
Configure it with the NETCA tool:
[oracle@ora10g1 bin]$ netca
netca            netca_deinst.sh
[oracle@ora10g1 bin]$ netca &
[1] 16383
[oracle@ora10g1 bin]$
During this process, the GUI may not show much progress detail,
but the command line does show which node's listener is currently being configured, as follows:
[oracle@ora10g1 bin]$ Configuring Listener:LISTENER
    ora10g1...
    ora10g2...
    ora10g3...
Listener configuration complete.
Oracle Net Services configuration successful. The exit code is 0

[1]+  Done                    netca
[oracle@ora10g1 bin]$
The listener configuration is written into the network admin directory on Node 3:
[root@ora10g3 bin]# su - oracle [oracle@ora10g3 ~]$ cat $ORACLE_HOME/network/admin/listener.ora # listener.ora.ora10g3 Network Configuration File: /u01/app/oracle/product/10g/dbhome_1/network/admin/listener.ora.ora10g3 # Generated by Oracle configuration tools. LISTENER_ORA10G3 = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ora10g3-vip)(PORT = 1521)(IP = FIRST)) (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.195.6)(PORT = 1521)(IP = FIRST)) ) ) LISTENER_ORA10G1 = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora10g1-vip)(PORT = 1521)(IP = FIRST)) ) (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.195.11)(PORT = 1521)(IP = FIRST)) ) (ADDRESS_LIST = (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC)) ) ) ) SID_LIST_LISTENER_ORA10G3 = (SID_LIST = (SID_DESC = (SID_NAME = PLSExtProc) (ORACLE_HOME = /u01/app/oracle/product/10g/dbhome_1) (PROGRAM = extproc) ) ) [oracle@ora10g3 ~]$ [oracle@ora10g3 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora # tnsnames.ora Network Configuration File: /u01/app/oracle/product/10g/dbhome_1/network/admin/tnsnames.ora # Generated by Oracle configuration tools. LISTENERS_DBLOGIC = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora10g1-vip)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = ora10g2-vip)(PORT = 1521)) ) DBLOGIC2 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ora10g2-vip)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = dblogic) (INSTANCE_NAME = dblogic2) ) ) DBLOGIC1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ora10g1-vip)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = dblogic) (INSTANCE_NAME = dblogic1) ) ) DBLOGIC = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ora10g1-vip)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = ora10g2-vip)(PORT = 1521)) (LOAD_BALANCE = yes) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = dblogic) ) ) [oracle@ora10g3 ~]$ [oracle@ora10g3 ~]$ lsnrctl status LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 03-JAN-2015 07:29:23 Copyright (c) 1991, 2005, Oracle. All rights reserved. Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521)) STATUS of the LISTENER ------------------------ Alias LISTENER_ORA10G3 Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production Start Date 03-JAN-2015 07:24:19 Uptime 0 days 0 hr. 5 min. 4 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File /u01/app/oracle/product/10g/dbhome_1/network/admin/listener.ora Listener Log File /u01/app/oracle/product/10g/dbhome_1/network/log/listener_ora10g3.log Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.195.26)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.195.6)(PORT=1521))) Services Summary... Service "PLSExtProc" has 1 instance(s). Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service... The command completed successfully [oracle@ora10g3 ~]$ [oracle@ora10g3 ~]$ |
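To confirm that the new listener is also registered with CRS and running, a quick check like the following can be used (a sketch):

# Check the listener and node application resources for ora10g3 -- a verification sketch.
/u01/app/10g/grid/bin/crs_stat -t | grep lsnr
/u01/app/10g/grid/bin/srvctl status nodeapps -n ora10g3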
2. The Oracle instance
This part is done with the DBCA tool.
Run dbca on Node 1 to bring it up:
[oracle@ora10g1 bin]$ dbca &
[1] 28995
[oracle@ora10g1 bin]$
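If no X display is available, DBCA can also add an instance in silent mode. The sketch below is from memory rather than from this installation, so verify the exact options with "dbca -help" first; the SYS password is a placeholder:

# Silent-mode alternative to the DBCA GUI -- a hypothetical sketch; verify options with "dbca -help".
dbca -silent -addInstance \
     -nodeList ora10g3 \
     -gdbName dblogic \
     -instanceName dblogic3 \
     -sysDBAUserName sys \
     -sysDBAPassword change_on_install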
You may see a prompt like this:
At that point, check the space usage of your ASM disk group:
[oracle@ora10g1 bin]$ ps -ef | grep pmon
oracle    5336     1  0 Jan02 ?        00:00:00 asm_pmon_+ASM1
oracle    6048     1  0 Jan02 ?        00:00:00 ora_pmon_dblogic1
oracle    7196  2980  0 07:38 pts/2    00:00:00 grep pmon
[oracle@ora10g1 bin]$ export ORACLE_SID=+ASM1
[oracle@ora10g1 bin]$ asmcmd
ASMCMD>
ASMCMD> lsdg
State    Type    Rebal  Unbal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Name
MOUNTED  EXTERN  N      N         512   4096  1048576     18542    17104                0           17104              0  DATA01/
ASMCMD>
ASMCMD> exit
[oracle@ora10g1 bin]$
[oracle@ora10g1 bin]$
You may also hit an error like this in one of the following steps:
For now, click "OK" and let the installation finish; the explanation follows below.
The DBCA failure above, where the database instance fails to start on Node 3, is caused by the instance's dump/log directory structure not existing locally on Node 3.
The details are as follows.
First, look at the database state on Node 1:
[oracle@ora10g1 bin]$ env | grep SID
ORACLE_SID=+ASM1
[oracle@ora10g1 bin]$ ps -ef | grep pmon
oracle    5336     1  0 Jan02 ?        00:00:00 asm_pmon_+ASM1
oracle    6048     1  0 Jan02 ?        00:00:00 ora_pmon_dblogic1
oracle   19032  2980  0 07:47 pts/2    00:00:00 grep pmon
[oracle@ora10g1 bin]$ export ORACLE_SID=dblogic1
[oracle@ora10g1 bin]$ env | grep SID
ORACLE_SID=dblogic1
[oracle@ora10g1 bin]$
[oracle@ora10g1 bin]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Sat Jan 3 07:47:53 2015

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> select instance_name,status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
dblogic1         OPEN

SQL>
SQL> show parameter dump

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
background_core_dump                 string      partial
background_dump_dest                 string      /u01/app/oracle/admin/dblogic/
                                                 bdump
core_dump_dest                       string      /u01/app/oracle/admin/dblogic/
                                                 cdump
max_dump_file_size                   string      UNLIMITED
shadow_core_dump                     string      partial
user_dump_dest                       string      /u01/app/oracle/admin/dblogic/
                                                 udump
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
[oracle@ora10g1 bin]$
[oracle@ora10g1 bin]$ cd /u01/app/oracle/admin/dblogic/
[oracle@ora10g1 dblogic]$ ll
total 28
drwxr-xr-x  2 oracle oinstall 4096 Jan  3 07:47 adump
drwxr-xr-x 10 oracle oinstall 4096 Jan  3 07:34 bdump
drwxr-xr-x  2 oracle oinstall 4096 Jan  3 07:34 cdump
drwxr-xr-x  2 oracle oinstall 4096 Jan  3 07:34 dpdump
drwxr-xr-x  2 oracle oinstall 4096 Jan  3 07:34 hdump
drwxr-xr-x  2 oracle oinstall 4096 Jan  3 07:34 pfile
drwxr-xr-x  2 oracle oinstall 4096 Jan  3 07:34 udump
[oracle@ora10g1 dblogic]$
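All of these dump destinations live under /u01/app/oracle/admin/dblogic on the node's local file system, and that directory tree does not yet exist on Node 3, which is why the new instance failed to start there. A minimal sketch of pre-creating the same layout on Node 3 (assuming the identical ORACLE_BASE layout shown above):

# Run as oracle on ora10g3 -- create the local admin/dump directories for the new instance.
mkdir -p /u01/app/oracle/admin/dblogic/{adump,bdump,cdump,dpdump,hdump,pfile,udump}
ls -l /u01/app/oracle/admin/dblogic/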
——————————————————————————————————————————————
To be continued.