Heartbeat (EPEL) + Oracle DB 10g
Environment:
Linux HA: two nodes (node one: 192.168.56.140; node two: 192.168.56.141)
Virtualization: VMware Workstation 11
OS version: RHEL 5U10
Database: Oracle DB 10g
Storage: Openfiler 2.99 (IP: 192.168.79.130:446)
NICs: dual NICs (public network: 192.168.56.X; private network: 192.168.79.X)
——————
Prepare the virtual machines:
(common configuration)
File: /etc/hosts
[root@h1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
#127.0.0.1      localhost.localdomain localhost
#::1            localhost6.localdomain6 localhost6

# Local
127.0.0.1       localhost

# Pub
192.168.56.140  h1
192.168.56.141  h2

# Scan IP
192.168.56.150  orcl

# Priv
192.168.79.140  h1-priv
192.168.79.141  h2-priv

# Storage
192.168.79.130  openfiler
[root@h1 ~]#
Note: set the NIC IPs statically (BOOTPROTO=static or none); avoid dynamic assignment (DHCP).
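As a sketch, a static configuration for the public interface of node one might look like the fragment below. The device name and netmask are assumptions, not taken from the original environment; adjust them for your setup:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- hypothetical static
# configuration for h1's public NIC (values are illustrative).
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.56.140
NETMASK=255.255.255.0
```

Restarting the network service (service network restart) applies the change.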
Firewall and SELinux:
[root@h1 ~]# sestatus
SELinux status:                 disabled
[root@h1 ~]# service iptables status
Firewall is stopped.
[root@h1 ~]#
Configure YUM:
[root@h1 ~]# mkdir /{software,iso,temp_me,backup}
[root@h1 ~]# mv /etc/yum.repos.d/* /backup
[root@h1 ~]#
[root@h1 ~]# cat /etc/yum.repos.d/iso.repo
[Server]
name=Server
baseurl=file:///iso/Server
gpgcheck=0
[VT]
name=VT
baseurl=file:///iso/VT
gpgcheck=0
[Cluster]
name=Cluster
baseurl=file:///iso/Cluster
gpgcheck=0
[ClusterStorage]
name=ClusterStorage
baseurl=file:///iso/ClusterStorage
gpgcheck=0
[root@h1 ~]#
[root@h1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              15G  2.5G   12G  19% /
/dev/sda1             289M   17M  258M   7% /boot
tmpfs                 2.0G     0  2.0G   0% /dev/shm
[root@h1 ~]#
[root@h1 ~]# mount /dev/sr0 /iso
mount: block device /dev/sr0 is write-protected, mounting read-only
[root@h1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              15G  2.5G   12G  19% /
/dev/sda1             289M   17M  258M   7% /boot
tmpfs                 2.0G     0  2.0G   0% /dev/shm
/dev/sr0              4.1G  4.1G     0 100% /iso
[root@h1 ~]# yum repolist
Loaded plugins: product-id, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Cluster                  | 1.5 kB     00:00
Cluster/primary          | 6.1 kB     00:00
Cluster                  32/32
ClusterStorage           | 1.5 kB     00:00
ClusterStorage/primary   | 8.6 kB     00:00
ClusterStorage           39/39
Server                   | 1.5 kB     00:00
Server/primary           | 942 kB     00:00
Server                   3358/3358
VT                       | 1.3 kB     00:00
VT/primary               |  19 kB     00:00
VT                       59/59
repo id           repo name        status
Cluster           Cluster          32
ClusterStorage    ClusterStorage   39
Server            Server           3,358
VT                VT               59
repolist: 3,488
[root@h1 ~]#
Install packages:
(for Oracle Database)
yum install -y binutils* compat-db setarch compat-libstdc++-* control-center elfutils-libelf-* elfutils-libelf-devel-* elfutils-libelf-devel-static-* gcc-* gcc-c++-* glibc-* glibc-common-* glibc-devel-* glibc-headers-* ksh libaio-* libaio-devel-* libgcc libstdc++-* libstdc++-devel make sysstat unixODBC unixODBC-devel compat-libstdc++-33.i386 compat-libstdc++-296.i386 glibc.i686 glibc-devel.i386 unixODBC.i386 unixODBC-devel.i386 libaio.i386 libaio-devel.i386 libgcc.i386 libstdc++.i386 libstdc++-devel.i386 libXp.i386
(for shared storage)
yum install -y iscsi* scsi*
(for Heartbeat)
yum install -y createrepo
Modify configuration files:
(for Oracle)
File: /etc/sysctl.conf
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
Apply immediately: sysctl -p
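The kernel.shmmax value above (2147483648 bytes, i.e. 2 GB) follows the common rule of thumb of half the physical RAM on a 4 GB machine. A quick sketch of the arithmetic, with a hypothetical MemTotal figure:

```shell
# Rule of thumb: shmmax = half of physical RAM, expressed in bytes.
# mem_kb is an example value (4 GB as /proc/meminfo's MemTotal would
# roughly report it); substitute your own machine's figure.
mem_kb=4194304
shmmax=$(( mem_kb / 2 * 1024 ))   # halve, then convert kB -> bytes
echo $shmmax                      # prints 2147483648
```

On machines with a different amount of RAM, recompute shmmax before copying the sysctl block.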
File: /etc/pam.d/login
session required /lib/security/pam_limits.so
session required pam_limits.so
File: /etc/security/limits.conf
* soft nproc 2047
* hard nproc 16384
* soft nofile 1024
* hard nofile 65536
(for Oracle)
Create the user and groups:
[root@h1 ~]# groupadd oinstall
[root@h1 ~]# groupadd dba
[root@h1 ~]# useradd -g oinstall -G dba oracle
[root@h1 ~]#
[root@h1 ~]# id oracle
uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba)
[root@h1 ~]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@h1 ~]#
(for Oracle)
Environment variables for the OS user "oracle":
[root@h1 ~]# su - oracle
[oracle@h1 ~]$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

export ORACLE_BASE=/u01/app
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORACLE_SID=elis10
export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib64:/usr/lib64:/usr/local/lib64:/usr/X11R6/lib64/
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
[oracle@h1 ~]$
[oracle@h1 ~]$ env | grep --color ORA
ORACLE_SID=elis10
ORACLE_BASE=/u01/app
ORACLE_TERM=xterm
ORACLE_HOME=/u01/app/product/10.2.0/db_1
[oracle@h1 ~]$
(for Oracle)
Create the directory structure:
[root@h1 ~]# mkdir /u01
[root@h1 ~]# chown -R oracle.oinstall /u01
[root@h1 ~]# su - oracle
[oracle@h1 ~]$ mkdir -p $ORACLE_HOME
[oracle@h1 ~]$ tree /u01/
/u01/
`-- app
    `-- product
        `-- 10.2.0
            `-- db_1

4 directories, 0 files
[oracle@h1 ~]$
——————
Storage configuration:
Openfiler management page: https://192.168.79.130:446/
Shared storage:
Size: 20 GB
iSCSI LUN: iqn.2006-01.com.openfiler:tsn.08269750791b
Storage configuration on the cluster node hosts:
[root@h1 ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          38      305203+  83  Linux
/dev/sda2              39        1967    15494692+  83  Linux
/dev/sda3            1968        2610     5164897+  82  Linux swap / Solaris
[root@h1 ~]#
[root@h1 ~]# iscsiadm -m discovery -t sendtargets -p openfiler
192.168.79.130:3260,1 iqn.2006-01.com.openfiler:tsn.08269750791b
[root@h1 ~]#
[root@h1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.08269750791b -p openfiler -l
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.08269750791b, portal: 192.168.79.130,3260] (multiple)
Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.08269750791b, portal: 192.168.79.130,3260] successful.
[root@h1 ~]#
[root@h1 ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          38      305203+  83  Linux
/dev/sda2              39        1967    15494692+  83  Linux
/dev/sda3            1968        2610     5164897+  82  Linux swap / Solaris

Disk /dev/sdb: 20.4 GB, 20468203520 bytes
64 heads, 32 sectors/track, 19520 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Disk /dev/sdb doesn't contain a valid partition table
[root@h1 ~]#
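So that the LUN comes back after a reboot without a manual login, the initiator can be told to reconnect automatically. A sketch, assuming the default open-iscsi configuration paths on RHEL 5 (verify them on your build):

```shell
# Start the iscsi service at boot on both nodes.
chkconfig iscsi on

# In /etc/iscsi/iscsid.conf, have recorded targets log in at startup:
#   node.startup = automatic
# (Targets already discovered can also be switched per-node with
#  iscsiadm -m node --op update -n node.startup -v automatic)
```

Without this, /dev/sdb would be missing after a reboot and any Heartbeat filesystem resource on it would fail to start.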
Partition the disk:
[root@h1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 19520.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/sdb: 20.4 GB, 20468203520 bytes
64 heads, 32 sectors/track, 19520 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-19520, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-19520, default 19520):
Using default value 19520

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@h1 ~]#
[root@h1 ~]# partprobe /dev/sdb
[root@h1 ~]#
[root@h1 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 20.4 GB, 20468203520 bytes
64 heads, 32 sectors/track, 19520 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       19520    19988464   83  Linux
[root@h1 ~]#
——————
Install Heartbeat.
Two approaches:
1. Build a local YUM repository from previously downloaded EPEL packages.
2. Connect the VM to the network and install directly from EPEL.
Approach 1:
Upload the Heartbeat packages to the server:
[root@h1 heartbeat]# pwd
/software/heartbeat
[root@h1 heartbeat]# ll
total 3964
-rw-r--r-- 1 root root 1637449 Apr 14  2014 heartbeat-2.1.4-11.el5.i386.rpm
-rw-r--r-- 1 root root 1691714 Apr 14  2014 heartbeat-2.1.4-11.el5.x86_64.rpm
-rw-r--r-- 1 root root   92173 Apr 14  2014 heartbeat-pils-2.1.4-11.el5.i386.rpm
-rw-r--r-- 1 root root   93297 Apr 14  2014 heartbeat-pils-2.1.4-11.el5.x86_64.rpm
-rw-r--r-- 1 root root  179298 Apr 14  2014 heartbeat-stonith-2.1.4-11.el5.i386.rpm
-rw-r--r-- 1 root root  190686 Apr 14  2014 heartbeat-stonith-2.1.4-11.el5.x86_64.rpm
-rw-r--r-- 1 root root   64252 Apr 14  2014 libnet-1.1.6-7.el5.i386.rpm
-rw-r--r-- 1 root root   63324 Apr 14  2014 libnet-1.1.6-7.el5.x86_64.rpm
[root@h1 heartbeat]#
createrepo:
[root@h1 software]# createrepo heartbeat/
  8/8 - heartbeat-stonith-2.1.4-11.el5.x86_64.rpm
Saving Primary metadata
Saving file lists metadata
Saving other metadata
[root@h1 software]#
[root@h1 software]# ls heartbeat/
heartbeat-2.1.4-11.el5.i386.rpm            heartbeat-stonith-2.1.4-11.el5.x86_64.rpm
heartbeat-2.1.4-11.el5.x86_64.rpm          libnet-1.1.6-7.el5.i386.rpm
heartbeat-pils-2.1.4-11.el5.i386.rpm       libnet-1.1.6-7.el5.x86_64.rpm
heartbeat-pils-2.1.4-11.el5.x86_64.rpm     repodata
heartbeat-stonith-2.1.4-11.el5.i386.rpm
[root@h1 software]#
Add it to the YUM repo file:
[root@h1 software]# cat /etc/yum.repos.d/iso.repo
[Server]
name=Server
baseurl=file:///iso/Server
gpgcheck=0
[VT]
name=VT
baseurl=file:///iso/VT
gpgcheck=0
[Cluster]
name=Cluster
baseurl=file:///iso/Cluster
gpgcheck=0
[ClusterStorage]
name=ClusterStorage
baseurl=file:///iso/ClusterStorage
gpgcheck=0
[Heartbeat]
name=Heartbeat
baseurl=file:///software/heartbeat
gpgcheck=0
[root@h1 software]#
Refresh the YUM cache:
[root@h1 software]# yum repolist
Loaded plugins: product-id, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Cluster                  | 1.5 kB     00:00
ClusterStorage           | 1.5 kB     00:00
Heartbeat                |  951 B     00:00
Heartbeat/primary        | 4.5 kB     00:00
Heartbeat                8/8
Server                   | 1.5 kB     00:00
VT                       | 1.3 kB     00:00
repo id           repo name        status
Cluster           Cluster          32
ClusterStorage    ClusterStorage   39
Heartbeat         Heartbeat        8
Server            Server           3,358
VT                VT               59
repolist: 3,496
[root@h1 software]#
Install Heartbeat:
[root@h1 software]# yum install heartbeat
Loaded plugins: product-id, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package heartbeat.i386 0:2.1.4-11.el5 set to be updated
--> Processing Dependency: libpils.so.1 for package: heartbeat
--> Processing Dependency: libnet.so.1 for package: heartbeat
--> Processing Dependency: libltdl.so.3 for package: heartbeat
--> Processing Dependency: libstonith.so.1 for package: heartbeat
---> Package heartbeat.x86_64 0:2.1.4-11.el5 set to be updated
--> Processing Dependency: libltdl.so.3()(64bit) for package: heartbeat
--> Processing Dependency: libstonith.so.1()(64bit) for package: heartbeat
--> Processing Dependency: libnet.so.1()(64bit) for package: heartbeat
--> Processing Dependency: libpils.so.1()(64bit) for package: heartbeat
--> Running transaction check
---> Package heartbeat-pils.i386 0:2.1.4-11.el5 set to be updated
---> Package heartbeat-pils.x86_64 0:2.1.4-11.el5 set to be updated
---> Package heartbeat-stonith.i386 0:2.1.4-11.el5 set to be updated
--> Processing Dependency: libopenhpi.so.2 for package: heartbeat-stonith
--> Processing Dependency: libnetsnmp.so.10 for package: heartbeat-stonith
---> Package heartbeat-stonith.x86_64 0:2.1.4-11.el5 set to be updated
--> Processing Dependency: libopenhpi.so.2()(64bit) for package: heartbeat-stonith
---> Package libnet.i386 0:1.1.6-7.el5 set to be updated
---> Package libnet.x86_64 0:1.1.6-7.el5 set to be updated
---> Package libtool-ltdl.i386 0:1.5.22-7.el5_4 set to be updated
---> Package libtool-ltdl.x86_64 0:1.5.22-7.el5_4 set to be updated
--> Running transaction check
---> Package net-snmp-libs.i386 1:5.3.2.2-20.el5 set to be updated
---> Package openhpi-libs.i386 0:2.14.0-5.el5 set to be updated
---> Package openhpi-libs.x86_64 0:2.14.0-5.el5 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================================
 Package              Arch       Version             Repository      Size
=======================================================================================================
Installing:
 heartbeat            i386       2.1.4-11.el5        Heartbeat      1.6 M
 heartbeat            x86_64     2.1.4-11.el5        Heartbeat      1.6 M
Installing for dependencies:
 heartbeat-pils       i386       2.1.4-11.el5        Heartbeat       90 k
 heartbeat-pils       x86_64     2.1.4-11.el5        Heartbeat       91 k
 heartbeat-stonith    i386       2.1.4-11.el5        Heartbeat      175 k
 heartbeat-stonith    x86_64     2.1.4-11.el5        Heartbeat      186 k
 libnet               i386       1.1.6-7.el5         Heartbeat       63 k
 libnet               x86_64     1.1.6-7.el5         Heartbeat       62 k
 libtool-ltdl         i386       1.5.22-7.el5_4      Server          37 k
 libtool-ltdl         x86_64     1.5.22-7.el5_4      Server          38 k
 net-snmp-libs        i386       1:5.3.2.2-20.el5    Server         1.3 M
 openhpi-libs         i386       2.14.0-5.el5        Server         157 k
 openhpi-libs         x86_64     2.14.0-5.el5        Server         168 k

Transaction Summary
=======================================================================================================
Install      13 Package(s)
Upgrade       0 Package(s)

Total download size: 5.5 M
Is this ok [y/N]: y
Downloading Packages:
-------------------------------------------------------------------------------------------------------
Total                                                                  1.1 GB/s | 5.5 MB     00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : libtool-ltdl          1/13
  Installing     : libtool-ltdl          2/13
  Installing     : heartbeat-pils        3/13
  Installing     : openhpi-libs          4/13
  Installing     : libnet                5/13
  Installing     : net-snmp-libs         6/13
  Installing     : heartbeat-pils        7/13
  Installing     : heartbeat-stonith     8/13
  Installing     : openhpi-libs          9/13
  Installing     : heartbeat-stonith    10/13
  Installing     : libnet               11/13
  Installing     : heartbeat            12/13
  Installing     : heartbeat            13/13

Installed:
  heartbeat.i386 0:2.1.4-11.el5           heartbeat.x86_64 0:2.1.4-11.el5

Dependency Installed:
  heartbeat-pils.i386 0:2.1.4-11.el5        heartbeat-pils.x86_64 0:2.1.4-11.el5
  heartbeat-stonith.i386 0:2.1.4-11.el5     heartbeat-stonith.x86_64 0:2.1.4-11.el5
  libnet.i386 0:1.1.6-7.el5                 libnet.x86_64 0:1.1.6-7.el5
  libtool-ltdl.i386 0:1.5.22-7.el5_4        libtool-ltdl.x86_64 0:1.5.22-7.el5_4
  net-snmp-libs.i386 1:5.3.2.2-20.el5       openhpi-libs.i386 0:2.14.0-5.el5
  openhpi-libs.x86_64 0:2.14.0-5.el5

Complete!
[root@h1 software]#
Approach 2:
[root@ha1 ~]# cd /software
[root@ha1 software]# mkdir epel
[root@ha1 software]# cd epel
[root@ha1 epel]# ls
[root@ha1 epel]#
[root@ha1 epel]# env | grep LANG
LANG=en_US.UTF-8
[root@ha1 epel]# wget http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
--2015-12-17 02:20:34--  http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
Resolving dl.fedoraproject.org... 209.132.181.27, 209.132.181.24, 209.132.181.25, ...
Connecting to dl.fedoraproject.org|209.132.181.27|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12232 (12K) [application/x-rpm]
Saving to: `epel-release-5-4.noarch.rpm'

100%[=============================================================>] 12,232      --.-K/s   in 0.001s

2015-12-17 02:20:34 (20.5 MB/s) - `epel-release-5-4.noarch.rpm' saved [12232/12232]

[root@ha1 epel]#
[root@ha1 epel]# ll
total 12
-rw-r--r-- 1 root root 12232 Aug 12  2010 epel-release-5-4.noarch.rpm
[root@ha1 epel]#
[root@ha1 epel]# rpm -ivh epel*
warning: epel-release-5-4.noarch.rpm: Header V3 DSA signature: NOKEY, key ID 217521f6
Preparing...                ########################################### [100%]
   1:epel-release           ########################################### [100%]
[root@ha1 epel]#
[root@ha1 epel]# ls /etc/yum.repos.d
epel.repo  epel-testing.repo  iso.repo  redhat.repo
[root@ha1 epel]#
[root@ha1 epel]# yum repolist
Loaded plugins: katello, product-id, security, subscription-manager
Updating certificate-based repositories.
Unable to read consumer identity
Cluster                  | 1.5 kB     00:00
ClusterStorage           | 1.5 kB     00:00
Server                   | 1.5 kB     00:00
VT                       | 1.3 kB     00:00
epel                     | 3.6 kB     00:00
epel/primary_db          | 2.9 MB     00:11
repo id           repo name                                        status
Cluster           Cluster                                          32
ClusterStorage    ClusterStorage                                   39
Server            Server                                           3,285
VT                VT                                               58
epel              Extra Packages for Enterprise Linux 5 - x86_64   6,744
repolist: 10,158
[root@ha1 epel]#
Run the installation:
yum install heartbeat*
[root@ha1 epel]# yum list | grep --color heartbeat
Unable to read consumer identity
heartbeat.i386                 2.1.4-11.el5      epel
heartbeat.x86_64               2.1.4-11.el5      epel
heartbeat-devel.i386           2.1.4-11.el5      epel
heartbeat-devel.x86_64         2.1.4-11.el5      epel
heartbeat-gui.x86_64           2.1.4-11.el5      epel
heartbeat-ldirectord.x86_64    2.1.4-11.el5      epel
heartbeat-pils.i386            2.1.4-11.el5      epel
heartbeat-pils.x86_64          2.1.4-11.el5      epel
heartbeat-stonith.i386         2.1.4-11.el5      epel
heartbeat-stonith.x86_64       2.1.4-11.el5      epel
[root@ha1 epel]#
[root@ha1 epel]# find /var/cache/yum/ -name "heartbeat*"
[root@ha1 epel]#
[root@ha1 epel]# yum install heartbeat*
Loaded plugins: katello, product-id, security, subscription-manager
Updating certificate-based repositories.
Unable to read consumer identity
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package heartbeat.i386 0:2.1.4-11.el5 set to be updated
--> Processing Dependency: libnet.so.1 for package: heartbeat
--> Processing Dependency: libltdl.so.3 for package: heartbeat
---> Package heartbeat.x86_64 0:2.1.4-11.el5 set to be updated
--> Processing Dependency: libnet.so.1()(64bit) for package: heartbeat
--> Processing Dependency: libltdl.so.3()(64bit) for package: heartbeat
---> Package heartbeat-devel.i386 0:2.1.4-11.el5 set to be updated
---> Package heartbeat-devel.x86_64 0:2.1.4-11.el5 set to be updated
---> Package heartbeat-gui.x86_64 0:2.1.4-11.el5 set to be updated
---> Package heartbeat-ldirectord.x86_64 0:2.1.4-11.el5 set to be updated
--> Processing Dependency: perl(Mail::Send) for package: heartbeat-ldirectord
--> Processing Dependency: perl(LWP::UserAgent) for package: heartbeat-ldirectord
--> Processing Dependency: perl(LWP::Debug) for package: heartbeat-ldirectord
--> Processing Dependency: ipvsadm for package: heartbeat-ldirectord
---> Package heartbeat-pils.i386 0:2.1.4-11.el5 set to be updated
---> Package heartbeat-pils.x86_64 0:2.1.4-11.el5 set to be updated
---> Package heartbeat-stonith.i386 0:2.1.4-11.el5 set to be updated
--> Processing Dependency: libopenhpi.so.2 for package: heartbeat-stonith
--> Processing Dependency: libnetsnmp.so.10 for package: heartbeat-stonith
---> Package heartbeat-stonith.x86_64 0:2.1.4-11.el5 set to be updated
--> Processing Dependency: libopenhpi.so.2()(64bit) for package: heartbeat-stonith
--> Running transaction check
---> Package ipvsadm.x86_64 0:1.24-13.el5 set to be updated
---> Package libnet.i386 0:1.1.6-7.el5 set to be updated
---> Package libnet.x86_64 0:1.1.6-7.el5 set to be updated
---> Package libtool-ltdl.i386 0:1.5.22-7.el5_4 set to be updated
---> Package libtool-ltdl.x86_64 0:1.5.22-7.el5_4 set to be updated
---> Package net-snmp-libs.i386 1:5.3.2.2-17.el5 set to be updated
---> Package openhpi-libs.i386 0:2.14.0-5.el5 set to be updated
---> Package openhpi-libs.x86_64 0:2.14.0-5.el5 set to be updated
---> Package perl-MailTools.noarch 0:1.77-1.el5 set to be updated
--> Processing Dependency: perl(Date::Parse) for package: perl-MailTools
--> Processing Dependency: perl(Date::Format) for package: perl-MailTools
---> Package perl-libwww-perl.noarch 0:5.805-1.1.1 set to be updated
--> Processing Dependency: perl-HTML-Parser >= 3.33 for package: perl-libwww-perl
--> Processing Dependency: perl(HTML::Entities) for package: perl-libwww-perl
--> Processing Dependency: perl(Compress::Zlib) for package: perl-libwww-perl
--> Running transaction check
---> Package perl-Compress-Zlib.x86_64 0:1.42-1.fc6 set to be updated
---> Package perl-HTML-Parser.x86_64 0:3.55-1.fc6 set to be updated
--> Processing Dependency: perl-HTML-Tagset >= 3.03 for package: perl-HTML-Parser
--> Processing Dependency: perl(HTML::Tagset) for package: perl-HTML-Parser
---> Package perl-TimeDate.noarch 1:1.16-5.el5 set to be updated
--> Running transaction check
---> Package perl-HTML-Tagset.noarch 0:3.10-2.1.1 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================================
 Package                Arch       Version             Repository    Size
=======================================================================================================
Installing:
 heartbeat              i386       2.1.4-11.el5        epel         1.6 M
 heartbeat              x86_64     2.1.4-11.el5        epel         1.6 M
 heartbeat-devel        i386       2.1.4-11.el5        epel         287 k
 heartbeat-devel        x86_64     2.1.4-11.el5        epel         287 k
 heartbeat-gui          x86_64     2.1.4-11.el5        epel         226 k
 heartbeat-ldirectord   x86_64     2.1.4-11.el5        epel         109 k
 heartbeat-pils         i386       2.1.4-11.el5        epel          90 k
 heartbeat-pils         x86_64     2.1.4-11.el5        epel          91 k
 heartbeat-stonith      i386       2.1.4-11.el5        epel         175 k
 heartbeat-stonith      x86_64     2.1.4-11.el5        epel         186 k
Installing for dependencies:
 ipvsadm                x86_64     1.24-13.el5         Cluster       34 k
 libnet                 i386       1.1.6-7.el5         epel          63 k
 libnet                 x86_64     1.1.6-7.el5         epel          62 k
 libtool-ltdl           i386       1.5.22-7.el5_4      Server        37 k
 libtool-ltdl           x86_64     1.5.22-7.el5_4      Server        38 k
 net-snmp-libs          i386       1:5.3.2.2-17.el5    Server       1.3 M
 openhpi-libs           i386       2.14.0-5.el5        Server       157 k
 openhpi-libs           x86_64     2.14.0-5.el5        Server       168 k
 perl-Compress-Zlib     x86_64     1.42-1.fc6          Server        52 k
 perl-HTML-Parser       x86_64     3.55-1.fc6          Server        92 k
 perl-HTML-Tagset       noarch     3.10-2.1.1          Server        14 k
 perl-MailTools         noarch     1.77-1.el5          epel          90 k
 perl-TimeDate          noarch     1:1.16-5.el5        Server        32 k
 perl-libwww-perl       noarch     5.805-1.1.1         Server       376 k

Transaction Summary
=======================================================================================================
Install      24 Package(s)
Upgrade       0 Package(s)

Total download size: 7.1 M
Is this ok [y/N]: y
Downloading Packages:
(1/24): libnet-1.1.6-7.el5.x86_64.rpm                    |  62 kB     00:00
(2/24): libnet-1.1.6-7.el5.i386.rpm                      |  63 kB     00:00
(3/24): perl-MailTools-1.77-1.el5.noarch.rpm             |  90 kB     00:00
(4/24): heartbeat-pils-2.1.4-11.el5.i386.rpm             |  90 kB     00:00
(5/24): heartbeat-pils-2.1.4-11.el5.x86_64.rpm           |  91 kB     00:00
(6/24): heartbeat-ldirectord-2.1.4-11.el5.x86_64.rpm     | 109 kB     00:00
(7/24): heartbeat-stonith-2.1.4-11.el5.i386.rpm          | 175 kB     00:00
(8/24): heartbeat-stonith-2.1.4-11.el5.x86_64.rpm        | 186 kB     00:00
(9/24): heartbeat-gui-2.1.4-11.el5.x86_64.rpm            | 226 kB     00:00
(10/24): heartbeat-devel-2.1.4-11.el5.i386.rpm           | 287 kB     00:01
(11/24): heartbeat-devel-2.1.4-11.el5.x86_64.rpm         | 287 kB     00:01
(12/24): heartbeat-2.1.4-11.el5.i386.rpm                 | 1.6 MB     00:11
(13/24): heartbeat-2.1.4-11.el5.x86_64.rpm               | 1.6 MB     00:06
-------------------------------------------------------------------------------------------------------
Total                                                                  285 kB/s | 7.1 MB     00:25
warning: rpmts_HdrFromFdno: Header V3 DSA signature: NOKEY, key ID 217521f6
epel/gpgkey                                              | 1.7 kB     00:00
Importing GPG key 0x217521F6 "Fedora EPEL <epel@fedoraproject.org>" from /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : libtool-ltdl            1/24
  Installing     : heartbeat-pils          2/24
  Installing     : openhpi-libs            3/24
  Installing     : libnet                  4/24
  Installing     : perl-Compress-Zlib      5/24
  Installing     : ipvsadm                 6/24
  Installing     : libtool-ltdl            7/24
  Installing     : heartbeat-pils          8/24
  Installing     : net-snmp-libs           9/24
  Installing     : heartbeat-stonith      10/24
  Installing     : heartbeat              11/24
  Installing     : openhpi-libs           12/24
  Installing     : heartbeat-stonith      13/24
  Installing     : libnet                 14/24
  Installing     : perl-HTML-Tagset       15/24
  Installing     : perl-HTML-Parser       16/24
  Installing     : perl-libwww-perl       17/24
  Installing     : perl-TimeDate          18/24
  Installing     : heartbeat-gui          19/24
  Installing     : perl-MailTools         20/24
  Installing     : heartbeat-ldirectord   21/24
  Installing     : heartbeat-devel        22/24
  Installing     : heartbeat              23/24
  Installing     : heartbeat-devel        24/24
Cluster/productid                                        | 1.7 kB     00:00
ClusterStorage/productid                                 | 1.8 kB     00:00
Server/productid                                         | 1.7 kB     00:00
Installed products updated.

Installed:
  heartbeat.i386 0:2.1.4-11.el5                heartbeat.x86_64 0:2.1.4-11.el5
  heartbeat-devel.i386 0:2.1.4-11.el5          heartbeat-devel.x86_64 0:2.1.4-11.el5
  heartbeat-gui.x86_64 0:2.1.4-11.el5          heartbeat-ldirectord.x86_64 0:2.1.4-11.el5
  heartbeat-pils.i386 0:2.1.4-11.el5           heartbeat-pils.x86_64 0:2.1.4-11.el5
  heartbeat-stonith.i386 0:2.1.4-11.el5        heartbeat-stonith.x86_64 0:2.1.4-11.el5

Dependency Installed:
  ipvsadm.x86_64 0:1.24-13.el5                 libnet.i386 0:1.1.6-7.el5
  libnet.x86_64 0:1.1.6-7.el5                  libtool-ltdl.i386 0:1.5.22-7.el5_4
  libtool-ltdl.x86_64 0:1.5.22-7.el5_4         net-snmp-libs.i386 1:5.3.2.2-17.el5
  openhpi-libs.i386 0:2.14.0-5.el5             openhpi-libs.x86_64 0:2.14.0-5.el5
  perl-Compress-Zlib.x86_64 0:1.42-1.fc6       perl-HTML-Parser.x86_64 0:3.55-1.fc6
  perl-HTML-Tagset.noarch 0:3.10-2.1.1         perl-MailTools.noarch 0:1.77-1.el5
  perl-TimeDate.noarch 1:1.16-5.el5            perl-libwww-perl.noarch 0:5.805-1.1.1

Complete!
[root@ha1 epel]#
[root@ha1 epel]# find /var/cache/yum/ -name "heartbeat*"
/var/cache/yum/epel/packages/heartbeat-gui-2.1.4-11.el5.x86_64.rpm
/var/cache/yum/epel/packages/heartbeat-devel-2.1.4-11.el5.x86_64.rpm
/var/cache/yum/epel/packages/heartbeat-ldirectord-2.1.4-11.el5.x86_64.rpm
/var/cache/yum/epel/packages/heartbeat-pils-2.1.4-11.el5.x86_64.rpm
/var/cache/yum/epel/packages/heartbeat-2.1.4-11.el5.x86_64.rpm
/var/cache/yum/epel/packages/heartbeat-stonith-2.1.4-11.el5.x86_64.rpm
/var/cache/yum/epel/packages/heartbeat-2.1.4-11.el5.i386.rpm
/var/cache/yum/epel/packages/heartbeat-pils-2.1.4-11.el5.i386.rpm
/var/cache/yum/epel/packages/heartbeat-devel-2.1.4-11.el5.i386.rpm
/var/cache/yum/epel/packages/heartbeat-stonith-2.1.4-11.el5.i386.rpm
[root@ha1 epel]# find /var/cache/yum/ -name "heartbeat*" | wc -l
10
[root@ha1 epel]#
After the installation:
A new user has been added:
[root@h1 software]# id hacluster
uid=101(hacluster) gid=160(haclient) groups=160(haclient)
[root@h1 software]#
The Heartbeat home directory:
[root@h1 software]# ll /etc/ha.d/
total 24
-rwxr-xr-x 1 root root  745 Mar 20  2010 harc
drwxr-xr-x 2 root root 4096 Dec 16 14:42 rc.d
-rw-r--r-- 1 root root  692 Mar 20  2010 README.config
drwxr-xr-x 2 root root 4096 Dec 16 14:42 resource.d
-rw-r--r-- 1 root root 7862 Mar 20  2010 shellfuncs
[root@h1 software]#
Network ports used by Heartbeat:
[root@h1 ~]# netstat -tupln | grep --color heartbeat
udp        0      0 0.0.0.0:9604      0.0.0.0:*       4137/heartbeat: wri
udp        0      0 0.0.0.0:694       0.0.0.0:*       4137/heartbeat: wri
udp        0      0 0.0.0.0:694       0.0.0.0:*       4135/heartbeat: wri
udp        0      0 0.0.0.0:9152      0.0.0.0:*       4135/heartbeat: wri
[root@h1 ~]#
At this point, the Heartbeat installation is complete.
——————
Install Oracle:
Upload the installation media, then extract it and run the installer:
[root@h1 software]# ll
total 741392
-rw-r--r-- 1 root root 758433170 Jul 14  2010 10201_database_linux_x86_64.cpio.gz
drwxr-xr-x 3 root root      4096 Dec 16 14:39 heartbeat
[root@h1 software]#
[root@h1 software]# chmod -R 777 /software
[root@h1 software]# su - oracle
[oracle@h1 ~]$ cd /software
[oracle@h1 software]$ gunzip 10201_database_linux_x86_64.cpio.gz
[oracle@h1 software]$ cpio -idmv < 10201_database_linux_x86_64.cpio
(... ... etc.)
[oracle@h1 software]$ ls
database  heartbeat
[oracle@h1 software]$ cd database/
[oracle@h1 database]$ ls
doc  install  response  runInstaller  stage  welcome.html
[oracle@h1 database]$
[oracle@h1 database]$ export DISPLAY=192.168.56.1:0.0
[oracle@h1 database]$ xhost +
access control disabled, clients can connect from any host
xhost:  must be on local machine to enable or disable access control.
[oracle@h1 database]$
[oracle@h1 database]$ ./runInstaller
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be redhat-3, SuSE-9, redhat-4, UnitedLinux-1.0, asianux-1 or asianux-2
                                      Passed

All installer requirements met.

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-12-16_02-59-59PM. Please wait ...
[oracle@h1 database]$
What follows is the graphical installation.
The GUI portion is unremarkable; just click Next through the screens. (Screenshots of it can be found in earlier documents, so they are not repeated here.)
Note that at this stage only the software is installed; no database is created yet.
The installer then shows the Oracle software being installed (screenshot omitted); just wait patiently for it to finish. (The software must be installed on both Heartbeat nodes.)
Run the scripts:
[root@h1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory to 770.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete
[root@h1 ~]#
[root@h1 ~]# /u01/app/product/10.2.0/db_1/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/product/10.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
[root@h1 ~]#
At this point, the Oracle database software installation is complete.
——————
About creating the database (notes).
The database only needs to be created on one of the Linux HA nodes.
Note: the database must be created on the shared storage, not on local disk.
——————
About configuring Heartbeat resources (notes and steps).
Therefore, before creating the database, configure the Heartbeat shared-storage resources and verify that the HA function works.
Let's begin:
Format the shared-storage partition as ext3:
[root@h1 ~]# mkfs.ext3 /dev/sdb1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
2501856 inodes, 4997116 blocks
249855 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
153 block groups
32768 blocks per group, 32768 fragments per group
16352 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 39 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@h1 ~]#
Create the mount point:
[root@h1 ~]# mkdir /oradata
[root@h1 ~]#
[root@h1 ~]# ll -d /oradata
drwxr-xr-x 2 root root 4096 Dec 16 15:14 /oradata
[root@h1 ~]# chown -R oracle.oinstall /oradata
[root@h1 ~]#
Before configuring Heartbeat in earnest, it is worth reading its README:
[root@ha2 epel]# ls /etc/ha.d
harc  rc.d  README.config  resource.d  shellfuncs
[root@ha2 epel]#
[root@ha2 epel]# cat /etc/ha.d/README.config

You need three configuration files to make heartbeat happy,
and they all go in this directory.

They are:
        ha.cf           Main configuration file
        haresources     Resource configuration file
        authkeys        Authentication information

These first two may be readable by everyone, but the authkeys file
must not be.

The good news is that sample versions of these files may be found in
the documentation directory (providing you installed the documentation).

If you installed heartbeat using rpm packages then this command will
show you where they are on your system:
        rpm -q heartbeat -d

If you installed heartbeat using Debian packages then the
documentation should be located in /usr/share/doc/heartbeat
[root@ha2 epel]#
The template files:
[root@ha2 doc]# pwd
/usr/share/doc
[root@ha2 doc]# ls | grep heartbeat
heartbeat-2.1.4
heartbeat-ldirectord-2.1.4
heartbeat-pils-2.1.4
heartbeat-stonith-2.1.4
[root@ha2 doc]#
[root@ha2 doc]# file `ls | grep heartbeat`
heartbeat-2.1.4:            directory
heartbeat-ldirectord-2.1.4: directory
heartbeat-pils-2.1.4:       directory
heartbeat-stonith-2.1.4:    directory
[root@ha2 doc]#
[root@ha2 doc]# cd heartbeat-2.1.4/
[root@ha2 heartbeat-2.1.4]# ls
apphbd.cf   COPYING.LGPL         GettingStarted.txt  hb_report.html      README             startstop
authkeys    DirectoryMap.txt     ha.cf               hb_report.txt       Requirements.html
AUTHORS     faqntips.html        HardwareGuide.html  heartbeat_api.html  Requirements.txt
ChangeLog   faqntips.txt         HardwareGuide.txt   heartbeat_api.txt   rsync.html
COPYING     GettingStarted.html  haresources         logd.cf             rsync.txt
[root@ha2 heartbeat-2.1.4]#
File: /etc/ha.d/authkeys
Template file:
[root@ha2 heartbeat-2.1.4]# cat authkeys
#
#       Authentication file.  Must be mode 600
#
#
#       Must have exactly one auth directive at the front.
#       auth    send authentication using this method-id
#
#       Then, list the method and key that go with that method-id
#
#       Available methods: crc sha1, md5.  Crc doesn't need/want a key.
#
#       You normally only have one authentication method-id listed in this file
#
#       Put more than one to make a smooth transition when changing auth
#       methods and/or keys.
#
#
#       sha1 is believed to be the "best", md5 next best.
#
#       crc adds no security, except from packet corruption.
#               Use only on physically secure networks.
#
#auth 1
#1 crc
#2 sha1 HI!
#3 md5 Hello!
[root@ha2 heartbeat-2.1.4]#

Current settings:
[root@h1 ha.d]# pwd
/etc/ha.d
[root@h1 ha.d]# cat authkeys
auth 1
1 crc
[root@h1 ha.d]#
[root@h1 ha.d]# chmod -R 600 authkeys
[root@h1 ha.d]#
This file must have mode 600; otherwise, starting the Heartbeat service fails with errors like:
heartbeat[5990]: 2015/12/16_15:23:09 ERROR: Bad permissions on keyfile [/etc/ha.d/authkeys], 600 recommended.
heartbeat[5990]: 2015/12/16_15:23:09 ERROR: Authentication configuration error.
heartbeat[5990]: 2015/12/16_15:23:09 ERROR: Configuration error, heartbeat not started.
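The permission requirement is easy to enforce in a script; a minimal sketch, demonstrated on a temporary file rather than the real /etc/ha.d/authkeys:

```shell
# Ensure a keyfile is mode 600 before starting Heartbeat; the target
# path would normally be /etc/ha.d/authkeys.
keyfile=$(mktemp)               # stand-in for /etc/ha.d/authkeys in this sketch
chmod 644 "$keyfile"            # simulate the bad permissions that trigger the error

mode=$(stat -c '%a' "$keyfile")
if [ "$mode" != "600" ]; then
    echo "fixing permissions on $keyfile (was $mode)"
    chmod 600 "$keyfile"
fi
final_mode=$(stat -c '%a' "$keyfile")
echo "$final_mode"              # prints 600
rm -f "$keyfile"
```

Running a check like this before `service heartbeat start` avoids the startup abort shown in the log above.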
File: /etc/ha.d/ha.cf
模板文件: [root@ha2 heartbeat-2.1.4]# cat ha.cf # # There are lots of options in this file. All you have to have is a set # of nodes listed {"node ...} one of {serial, bcast, mcast, or ucast}, # and a value for "auto_failback". # # ATTENTION: As the configuration file is read line by line, # THE ORDER OF DIRECTIVE MATTERS! # # In particular, make sure that the udpport, serial baud rate # etc. are set before the heartbeat media are defined! # debug and log file directives go into effect when they # are encountered. # # All will be fine if you keep them ordered as in this example. # # # Note on logging: # If all of debugfile, logfile and logfacility are not defined, # logging is the same as use_logd yes. In other case, they are # respectively effective. if detering the logging to syslog, # logfacility must be "none". # # File to write debug messages to #debugfile /var/log/ha-debug # # # File to write other messages to # #logfile /var/log/ha-log # # # Facility to use for syslog()/logger # logfacility local0 # # # A note on specifying "how long" times below... # # The default time unit is seconds # 10 means ten seconds # # You can also specify them in milliseconds # 1500ms means 1.5 seconds # # # keepalive: how long between heartbeats? # #keepalive 2 # # deadtime: how long-to-declare-host-dead? # # If you set this too low you will get the problematic # split-brain (or cluster partition) problem. # See the FAQ for how to use warntime to tune deadtime. # #deadtime 30 # # warntime: how long before issuing "late heartbeat" warning? # See the FAQ for how to use warntime to tune deadtime. # #warntime 10 # # # Very first dead time (initdead) # # On some machines/OSes, etc. the network takes a while to come up # and start working right after you've been rebooted. As a result # we have a separate dead time for when things first come up. # It should be at least twice the normal dead time. # #initdead 120 # # # What UDP port to use for bcast/ucast communication? 
# #udpport 694 # # Baud rate for serial ports... # #baud 19200 # # serial serialportname ... #serial /dev/ttyS0 # Linux #serial /dev/cuaa0 # FreeBSD #serial /dev/cuad0 # FreeBSD 6.x #serial /dev/cua/a # Solaris # # # What interfaces to broadcast heartbeats over? # #bcast eth0 # Linux #bcast eth1 eth2 # Linux #bcast le0 # Solaris #bcast le1 le2 # Solaris # # Set up a multicast heartbeat medium # mcast [dev] [mcast group] [port] [ttl] [loop] # # [dev] device to send/rcv heartbeats on # [mcast group] multicast group to join (class D multicast address # 224.0.0.0 - 239.255.255.255) # [port] udp port to sendto/rcvfrom (set this value to the # same value as "udpport" above) # [ttl] the ttl value for outbound heartbeats. this effects # how far the multicast packet will propagate. (0-255) # Must be greater than zero. # [loop] toggles loopback for outbound multicast heartbeats. # if enabled, an outbound packet will be looped back and # received by the interface it was sent on. (0 or 1) # Set this value to zero. # # #mcast eth0 225.0.0.1 694 1 0 # # Set up a unicast / udp heartbeat medium # ucast [dev] [peer-ip-addr] # # [dev] device to send/rcv heartbeats on # [peer-ip-addr] IP address of peer to send packets to # #ucast eth0 192.168.1.2 # # # About boolean values... # # Any of the following case-insensitive values will work for true: # true, on, yes, y, 1 # Any of the following case-insensitive values will work for false: # false, off, no, n, 0 # # # # auto_failback: determines whether a resource will # automatically fail back to its "primary" node, or remain # on whatever node is serving it until that node fails, or # an administrator intervenes. # # The possible values for auto_failback are: # on - enable automatic failbacks # off - disable automatic failbacks # legacy - enable automatic failbacks in systems # where all nodes do not yet support # the auto_failback option. # # auto_failback "on" and "off" are backwards compatible with the old # "nice_failback on" setting. 
# # See the FAQ for information on how to convert # from "legacy" to "on" without a flash cut. # (i.e., using a "rolling upgrade" process) # # The default value for auto_failback is "legacy", which # will issue a warning at startup. So, make sure you put # an auto_failback directive in your ha.cf file. # (note: auto_failback can be any boolean or "legacy") # auto_failback on # # # Basic STONITH support # Using this directive assumes that there is one stonith # device in the cluster. Parameters to this device are # read from a configuration file. The format of this line is: # # stonith <stonith_type> <configfile> # # NOTE: it is up to you to maintain this file on each node in the # cluster! # #stonith baytech /etc/ha.d/conf/stonith.baytech # # STONITH support # You can configure multiple stonith devices using this directive. # The format of the line is: # stonith_host <hostfrom> <stonith_type> <params...> # <hostfrom> is the machine the stonith device is attached # to or * to mean it is accessible from any host. # <stonith_type> is the type of stonith device (a list of # supported drives is in /usr/lib/stonith.) # <params...> are driver specific parameters. To see the # format for a particular device, run: # stonith -l -t <stonith_type> # # # Note that if you put your stonith device access information in # here, and you make this file publically readable, you're asking # for a denial of service attack ;-) # # To get a list of supported stonith devices, run # stonith -L # For detailed information on which stonith devices are supported # and their detailed configuration options, run this command: # stonith -h # #stonith_host * baytech 10.0.0.3 mylogin mysecretpassword #stonith_host ken3 rps10 /dev/ttyS1 kathy 0 #stonith_host kathy rps10 /dev/ttyS1 ken3 0 # # Watchdog is the watchdog timer. If our own heart doesn't beat for # a minute, then our machine will reboot. 
# NOTE: If you are using the software watchdog, you very likely # wish to load the module with the parameter "nowayout=0" or # compile it without CONFIG_WATCHDOG_NOWAYOUT set. Otherwise even # an orderly shutdown of heartbeat will trigger a reboot, which is # very likely NOT what you want. # #watchdog /dev/watchdog # # Tell what machines are in the cluster # node nodename ... -- must match uname -n #node ken3 #node kathy # # Less common options... # # Treats 10.10.10.254 as a psuedo-cluster-member # Used together with ipfail below... # note: don't use a cluster node as ping node # #ping 10.10.10.254 # # Treats 10.10.10.254 and 10.10.10.253 as a psuedo-cluster-member # called group1. If either 10.10.10.254 or 10.10.10.253 are up # then group1 is up # Used together with ipfail below... # #ping_group group1 10.10.10.254 10.10.10.253 # # HBA ping derective for Fiber Channel # Treats fc-card-name as psudo-cluster-member # used with ipfail below ... # # You can obtain HBAAPI from http://hbaapi.sourceforge.net. You need # to get the library specific to your HBA directly from the vender # To install HBAAPI stuff, all You need to do is to compile the common # part you obtained from the sourceforge. This will produce libHBAAPI.so # which you need to copy to /usr/lib. You need also copy hbaapi.h to # /usr/include. # # The fc-card-name is the name obtained from the hbaapitest program # that is part of the hbaapi package. Running hbaapitest will produce # a verbose output. One of the first line is similar to: # Apapter number 0 is named: qlogic-qla2200-0 # Here fc-card-name is qlogic-qla2200-0. # #hbaping fc-card-name # # # Processes started and stopped with heartbeat. 
Restarted unless # they exit with rc=100 # #respawn userid /path/name/to/run #respawn hacluster /usr/lib/heartbeat/ipfail # # Access control for client api # default is no access # #apiauth client-name gid=gidlist uid=uidlist #apiauth ipfail gid=haclient uid=hacluster ########################### # # Unusual options. # ########################### # # hopfudge maximum hop count minus number of nodes in config #hopfudge 1 # # deadping - dead time for ping nodes #deadping 30 # # hbgenmethod - Heartbeat generation number creation method # Normally these are stored on disk and incremented as needed. #hbgenmethod time # # realtime - enable/disable realtime execution (high priority, etc.) # defaults to on #realtime off # # debug - set debug level # defaults to zero #debug 1 # # API Authentication - replaces the fifo-permissions-based system of the past # # # You can put a uid list and/or a gid list. # If you put both, then a process is authorized if it qualifies under either # the uid list, or under the gid list. # # The groupname "default" has special meaning. If it is specified, then # this will be used for authorizing groupless clients, and any client groups # not otherwise specified. # # There is a subtle exception to this. "default" will never be used in the # following cases (actual default auth directives noted in brackets) # ipfail (uid=HA_CCMUSER) # ccm (uid=HA_CCMUSER) # ping (gid=HA_APIGROUP) # cl_status (gid=HA_APIGROUP) # # This is done to avoid creating a gaping security hole and matches the most # likely desired configuration. # #apiauth ipfail uid=hacluster #apiauth ccm uid=hacluster #apiauth cms uid=hacluster #apiauth ping gid=haclient uid=alanr,root #apiauth default gid=haclient # message format in the wire, it can be classic or netstring, # default: classic #msgfmt classic/netstring # Do we use logging daemon? # If logging daemon is used, logfile/debugfile/logfacility in this file # are not meaningful any longer. 
You should check the config file for logging
# daemon (the default is /etc/logd.cf)
# more infomartion can be fould in http://www.linux-ha.org/ha_2ecf_2fUseLogdDirective
# Setting use_logd to "yes" is recommended
#
# use_logd yes/no
#
# the interval we reconnect to logging daemon if the previous connection failed
# default: 60 seconds
#conn_logd_time 60
#
#
# Configure compression module
# It could be zlib or bz2, depending on whether u have the corresponding
# library in the system.
#compression    bz2
#
# Confiugre compression threshold
# This value determines the threshold to compress a message,
# e.g. if the threshold is 1, then any message with size greater than 1 KB
# will be compressed, the default is 2 (KB)
#compression_threshold 2
[root@ha2 heartbeat-2.1.4]#

Current configuration:
[root@h1 ha.d]# cat ha.cf
logfile /var/log/ha-log
logfacility     local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
baud    19200
bcast   eth0            # Linux
ucast eth0 192.168.56.140
auto_failback on
watchdog /dev/watchdog
node    h1
node    h2
compression     bz2
compression_threshold 2
[root@h1 ha.d]#
Notes on the above:
The IP on the ucast line differs per node. As written here it is the current node's own IP; per the ha.cf template, however, ucast expects the peer node's address ("[peer-ip-addr] IP address of peer to send packets to"), so pointing it at the other node is the safer choice (the bcast eth0 line keeps heartbeats flowing either way).
Also, when declaring the nodes, do not put both on one line as "node h1 h2"; they must be declared separately as "node h1" and "node h2".
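Since ucast is the only per-node difference in ha.cf, the correct line for each node can be derived from the /etc/hosts table above. A minimal sketch following the peer-address convention from the template (the helper name `ucast_line` is made up for illustration):

```shell
# Emit the peer-directed ucast line for a given node.
# Assumes the two-node layout from /etc/hosts:
#   h1 = 192.168.56.140, h2 = 192.168.56.141
ucast_line() {
    me="$1"
    case "$me" in
        h1) echo "ucast eth0 192.168.56.141" ;;  # h1 sends heartbeats to h2
        h2) echo "ucast eth0 192.168.56.140" ;;  # h2 sends heartbeats to h1
        *)  echo "unknown node: $me" >&2; return 1 ;;
    esac
}

ucast_line h1   # -> ucast eth0 192.168.56.141
ucast_line h2   # -> ucast eth0 192.168.56.140
```

Generating the line this way removes the most common copy-paste mistake when cloning ha.cf to the second node.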
File: /etc/ha.d/haresources
模板文件: [root@ha2 heartbeat-2.1.4]# cat haresources # # This is a list of resources that move from machine to machine as # nodes go down and come up in the cluster. Do not include # "administrative" or fixed IP addresses in this file. # # <VERY IMPORTANT NOTE> # The haresources files MUST BE IDENTICAL on all nodes of the cluster. # # The node names listed in front of the resource group information # is the name of the preferred node to run the service. It is # not necessarily the name of the current machine. If you are running # auto_failback ON (or legacy), then these services will be started # up on the preferred nodes - any time they're up. # # If you are running with auto_failback OFF, then the node information # will be used in the case of a simultaneous start-up, or when using # the hb_standby {foreign,local} command. # # BUT FOR ALL OF THESE CASES, the haresources files MUST BE IDENTICAL. # If your files are different then almost certainly something # won't work right. # </VERY IMPORTANT NOTE> # # # We refer to this file when we're coming up, and when a machine is being # taken over after going down. # # You need to make this right for your installation, then install it in # /etc/ha.d # # Each logical line in the file constitutes a "resource group". # A resource group is a list of resources which move together from # one node to another - in the order listed. It is assumed that there # is no relationship between different resource groups. These # resource in a resource group are started left-to-right, and stopped # right-to-left. Long lists of resources can be continued from line # to line by ending the lines with backslashes ("\"). # # These resources in this file are either IP addresses, or the name # of scripts to run to "start" or "stop" the given resource. # # The format is like this: # #node-name resource1 resource2 ... 
resourceN # # # If the resource name contains an :: in the middle of it, the # part after the :: is passed to the resource script as an argument. # Multiple arguments are separated by the :: delimeter # # In the case of IP addresses, the resource script name IPaddr is # implied. # # For example, the IP address 135.9.8.7 could also be represented # as IPaddr::135.9.8.7 # # THIS IS IMPORTANT!! vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv # # The given IP address is directed to an interface which has a route # to the given address. This means you have to have a net route # set up outside of the High-Availability structure. We don't set it # up here -- we key off of it. # # The broadcast address for the IP alias that is created to support # an IP address defaults to the highest address on the subnet. # # The netmask for the IP alias that is created defaults to the same # netmask as the route that it selected in in the step above. # # The base interface for the IPalias that is created defaults to the # same netmask as the route that it selected in in the step above. # # If you want to specify that this IP address is to be brought up # on a subnet with a netmask of 255.255.255.0, you would specify # this as IPaddr::135.9.8.7/24 . # # If you wished to tell it that the broadcast address for this subnet # was 135.9.8.210, then you would specify that this way: # IPaddr::135.9.8.7/24/135.9.8.210 # # If you wished to tell it that the interface to add the address to # is eth0, then you would need to specify it this way: # IPaddr::135.9.8.7/24/eth0 # # And this way to specify both the broadcast address and the # interface: # IPaddr::135.9.8.7/24/eth0/135.9.8.210 # # The IP addresses you list in this file are called "service" addresses, # since they're they're the publicly advertised addresses that clients # use to get at highly available services. 
# # For a hot/standby (non load-sharing) 2-node system with only # a single service address, # you will probably only put one system name and one IP address in here. # The name you give the address to is the name of the default "hot" # system. # # Where the nodename is the name of the node which "normally" owns the # resource. If this machine is up, it will always have the resource # it is shown as owning. # # The string you put in for nodename must match the uname -n name # of your machine. Depending on how you have it administered, it could # be a short name or a FQDN. # #------------------------------------------------------------------- # # Simple case: One service address, default subnet and netmask # No servers that go up and down with the IP address # #just.linux-ha.org 135.9.216.110 # #------------------------------------------------------------------- # # Assuming the adminstrative addresses are on the same subnet... # A little more complex case: One service address, default subnet # and netmask, and you want to start and stop http when you get # the IP address... # #just.linux-ha.org 135.9.216.110 http #------------------------------------------------------------------- # # A little more complex case: Three service addresses, default subnet # and netmask, and you want to start and stop http when you get # the IP address... # #just.linux-ha.org 135.9.216.110 135.9.215.111 135.9.216.112 httpd #------------------------------------------------------------------- # # One service address, with the subnet, interface and bcast addr # explicitly defined. # #just.linux-ha.org 135.9.216.3/28/eth0/135.9.216.12 httpd # #------------------------------------------------------------------- # # An example where a shared filesystem is to be used. # Note that multiple aguments are passed to this script using # the delimiter '::' to separate each argument. 
#
#node1  10.0.0.170 Filesystem::/dev/sda1::/data1::ext2
#
#       Regarding the node-names in this file:
#
#       They must match the names of the nodes listed in ha.cf, which in turn
#       must match the `uname -n` of some node in the cluster.  So they aren't
#       virtual in any sense of the word.
#
[root@ha2 heartbeat-2.1.4]#

Current configuration:
[root@h1 ha.d]# cat haresources
#h1 IPaddr::192.168.56.150/24/eth0:0 Filesystem::/dev/sdb1::/oradata::ext3
#h2 IPaddr::192.168.56.150/24/eth0:0 Filesystem::/dev/sdb1::/oradata::ext3
#h1 IPaddr:192.168.56.150/24/eth0:0
#h1 IPaddr:192.168.56.151/24/eth0:0
h1 192.168.56.150 Filesystem::/dev/sdb1::/oradata::ext3
[root@h1 ha.d]#
Only one resource-group line is needed. The node name in the first field (h1 here) indicates the preferred node for these resources. Heartbeat's architecture is active/standby rather than active/active, so at any given time the resources run on only one node of the cluster, migrating to the other node only when something goes wrong.
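A resource-group line is read as: preferred node first, then resources started left to right. The parsing can be sketched in shell (this mirrors, but does not reuse, Heartbeat's own parser; the '::' argument separator and the bare-IP-implies-IPaddr shorthand follow the haresources template above):

```shell
# Parse the resource-group line used above.  Field 1 is the preferred
# node; a bare IP address implies the IPaddr resource script; '::'
# separates a script name from its arguments.
line="h1 192.168.56.150 Filesystem::/dev/sdb1::/oradata::ext3"

set -- $line
node="$1"; shift
summary="preferred node: $node"
for res in "$@"; do
    script=${res%%::*}                                  # part before the first '::'
    args=$(printf '%s\n' "${res#*::}" | sed 's/::/ /g') # remaining '::'-separated args
    case "$script" in
        *.*.*.*) summary="$summary; IPaddr $script" ;;  # bare IP -> IPaddr resource
        *)       summary="$summary; $script $args" ;;
    esac
done
echo "$summary"
# -> preferred node: h1; IPaddr 192.168.56.150; Filesystem /dev/sdb1 /oradata ext3
```

Resources are started left to right and stopped right to left, which is why the VIP precedes the Filesystem resource on the real line.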
For now, start Heartbeat on node one only (hostname: h1):
[root@h1 ha.d]# service heartbeat start
Starting High-Availability services:
2015/12/16_19:20:55 INFO:  Resource is stopped
                                                           [  OK  ]
[root@h1 ha.d]#
Log file: /var/log/ha-log
heartbeat[12559]: 2015/12/16_15:29:54 info: Version 2 support: false
heartbeat[12559]: 2015/12/16_15:29:54 WARN: Logging daemon is disabled --enabling logging daemon is recommended
heartbeat[12559]: 2015/12/16_15:29:54 info: **************************
heartbeat[12559]: 2015/12/16_15:29:54 info: Configuration validated. Starting heartbeat 2.1.4
heartbeat[12560]: 2015/12/16_15:29:54 info: heartbeat: version 2.1.4
heartbeat[12560]: 2015/12/16_15:29:54 info: Heartbeat generation: 1450308387
heartbeat[12560]: 2015/12/16_15:29:54 info: glib: UDP Broadcast heartbeat started on port 694 (694) interface eth0
heartbeat[12560]: 2015/12/16_15:29:54 info: glib: UDP Broadcast heartbeat closed on port 694 interface eth0 - Status: 1
heartbeat[12560]: 2015/12/16_15:29:54 info: G_main_add_TriggerHandler: Added signal manual handler
heartbeat[12560]: 2015/12/16_15:29:54 info: G_main_add_TriggerHandler: Added signal manual handler
heartbeat[12560]: 2015/12/16_15:29:54 info: G_main_add_SignalHandler: Added signal handler for signal 17
heartbeat[12560]: 2015/12/16_15:29:54 info: Comm_now_up(): updating status to active
heartbeat[12560]: 2015/12/16_15:29:54 info: Local status now set to: 'active'
heartbeat[12560]: 2015/12/16_15:29:54 info: Starting child client "/usr/lib64/heartbeat/ipfail" (101,160)
heartbeat[12560]: 2015/12/16_15:29:54 info: Starting child client "/usr/lib64/heartbeat/ccm" (101,160)
heartbeat[12560]: 2015/12/16_15:29:54 info: Local status now set to: 'up'
heartbeat[12566]: 2015/12/16_15:29:54 info: Starting "/usr/lib64/heartbeat/ipfail" as uid 101 gid 160 (pid 12566)
heartbeat[12567]: 2015/12/16_15:29:54 info: Starting "/usr/lib64/heartbeat/ccm" as uid 101 gid 160 (pid 12567)
ccm[12567]: 2015/12/16_15:29:54 info: Hostname: h1
heartbeat[12560]: 2015/12/16_15:29:55 info: Link h1:eth0 up.
ccm[12567]: 2015/12/16_15:29:58 info: G_main_add_SignalHandler: Added signal handler for signal 15
ipfail[12566]: 2015/12/16_15:29:58 info: Link Status update: Link h1/eth0 now has status up
(... etc.)
After Heartbeat starts:
[root@h1 ha.d]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:72:72:1E
          inet addr:192.168.56.140  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe72:721e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13827 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16585 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2223385 (2.1 MiB)  TX bytes:5551498 (5.2 MiB)

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:72:72:1E
          inet addr:192.168.56.150  Bcast:192.168.56.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth1      Link encap:Ethernet  HWaddr 00:0C:29:72:72:28
          inet addr:192.168.79.140  Bcast:192.168.79.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe72:7228/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3055 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4659 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:782795 (764.4 KiB)  TX bytes:492932 (481.3 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2823 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2823 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3820327 (3.6 MiB)  TX bytes:3820327 (3.6 MiB)

[root@h1 ha.d]#
[root@h1 ha.d]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              15G  5.3G  8.4G  39% /
/dev/sda1             289M   17M  258M   7% /boot
tmpfs                 2.0G     0  2.0G   0% /dev/shm
/dev/sdb1              19G  173M   18G   1% /oradata
[root@h1 ha.d]#
As shown above, both eth0:0 and /oradata were brought up automatically via Heartbeat's haresources.
Copy node one's authkeys, haresources, and ha.cf to node two, adjust the ucast line in ha.cf accordingly, and then start Heartbeat on node two: service heartbeat start.
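The ucast adjustment can be done with sed during the copy. A minimal sketch on scratch files (it follows the peer-address convention from the ha.cf template; in real use the result would be transferred with scp to node two's /etc/ha.d/ha.cf, which this sketch does not do):

```shell
# Rewrite the peer IP in the ucast directive for node two's copy.
# Under the peer-address convention, h1's ha.cf points at h2
# (192.168.56.141) and node two's copy must point back at h1
# (192.168.56.140).
src=$(mktemp)
printf 'udpport 694\nucast eth0 192.168.56.141\nauto_failback on\n' > "$src"

dst=$(mktemp)
sed 's/^ucast eth0 .*/ucast eth0 192.168.56.140/' "$src" > "$dst"

new_line=$(grep '^ucast' "$dst")
echo "$new_line"    # -> ucast eth0 192.168.56.140
rm -f "$src" "$dst"
```

Everything else in ha.cf, plus authkeys and haresources, must stay byte-identical on both nodes.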
Once node two's Heartbeat is up, if node one loses its network connection or the server goes down, the IP and shared-storage resources migrate to node two automatically.
Afterwards, restore node one's network from its virtual-machine console: service network start.
With that, the shared-storage resource part of the Heartbeat configuration is complete.
——————
Creating the database with DBCA.
Since the storage resources currently reside on node one, we invoke DBCA there.
[root@h1 ~]# hostname
h1
[root@h1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              15G  5.3G  8.4G  39% /
/dev/sda1             289M   17M  258M   7% /boot
tmpfs                 2.0G     0  2.0G   0% /dev/shm
/dev/sr0              4.1G  4.1G     0 100% /media/RHEL_5.10 x86_64 DVD
/dev/sdb1              19G  173M   18G   1% /oradata
[root@h1 ~]#
DISPLAY:
[root@h1 ~]# export DISPLAY=192.168.79.1:0.0
[root@h1 ~]# xhost +
access control disabled, clients can connect from any host
xhost:  must be on local machine to enable or disable access control.
[root@h1 ~]# su - oracle
[oracle@h1 ~]$ xhost +
access control disabled, clients can connect from any host
xhost:  must be on local machine to enable or disable access control.
[oracle@h1 ~]$
Directory ownership: /oradata
[root@h1 ~]# ll -d /oradata/
drwxr-xr-x 3 root root 4096 Dec 16 15:13 /oradata/
[root@h1 ~]#
[root@h1 ~]# chown -R oracle.oinstall /oradata/
[root@h1 ~]#
[root@h1 ~]# ll -d /oradata/
drwxr-xr-x 3 oracle oinstall 4096 Dec 16 15:13 /oradata/
[root@h1 ~]#
Launch DBCA:
[oracle@h1 ~]$ dbca
[oracle@h1 ~]$
Take a look:
[oracle@h1 ~]$ ps -ef | grep ora_
oracle    9744     1  0 21:09 ?        00:00:00 ora_j000_orcl
oracle    9989  7467  0 21:10 pts/3    00:00:00 grep ora_
oracle   13713     1  0 20:28 ?        00:00:00 ora_pmon_orcl
oracle   13715     1  0 20:28 ?        00:00:00 ora_psp0_orcl
oracle   13721     1  0 20:28 ?        00:00:00 ora_mman_orcl
oracle   13723     1  0 20:28 ?        00:00:00 ora_dbw0_orcl
oracle   13725     1  0 20:28 ?        00:00:00 ora_lgwr_orcl
oracle   13727     1  0 20:28 ?        00:00:00 ora_ckpt_orcl
oracle   13729     1  0 20:28 ?        00:00:00 ora_smon_orcl
oracle   13735     1  0 20:28 ?        00:00:00 ora_reco_orcl
oracle   13737     1  0 20:28 ?        00:00:00 ora_cjq0_orcl
oracle   13739     1  0 20:28 ?        00:00:00 ora_mmon_orcl
oracle   13741     1  0 20:28 ?        00:00:00 ora_mmnl_orcl
oracle   13743     1  0 20:28 ?        00:00:00 ora_d000_orcl
oracle   13745     1  0 20:28 ?        00:00:00 ora_s000_orcl
oracle   14082     1  0 20:29 ?        00:00:00 ora_qmnc_orcl
oracle   14467     1  0 20:29 ?        00:00:00 ora_q001_orcl
oracle   14472     1  0 20:29 ?        00:00:00 ora_q002_orcl
[oracle@h1 ~]$
[oracle@h1 ~]$ export ORACLE_SID=orcl
[oracle@h1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Dec 16 21:10:19 2015

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

SQL> select * from scott.dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
[oracle@h1 ~]$

Check the state of the shared disk:
[oracle@h1 ~]$ cd /oradata/
[oracle@h1 oradata]$ ll
total 24
drwxr-x--- 3 oracle oinstall  4096 Dec 16 20:18 flash_recovery_area
drwx------ 2 oracle oinstall 16384 Dec 16 15:13 lost+found
drwxr-x--- 2 oracle oinstall  4096 Dec 16 20:21 orcl
[oracle@h1 oradata]$ tree *
flash_recovery_area
`-- ORCL
    `-- onlinelog
lost+found
orcl
|-- control01.ctl
|-- control02.ctl
|-- control03.ctl
|-- example01.dbf
|-- redo01.log
|-- redo02.log
|-- redo03.log
|-- sysaux01.dbf
|-- system01.dbf
|-- temp01.dbf
|-- undotbs01.dbf
`-- users01.dbf

2 directories, 12 files
[oracle@h1 oradata]$
——————
Listener configuration: NETCA
No listener is currently configured on the system:
[oracle@h1 oradata]$ ps -ef | grep lsnr
oracle   19662  7467  0 21:14 pts/3    00:00:00 grep lsnr
[oracle@h1 oradata]$
[oracle@h1 oradata]$ cd $ORACLE_HOME/network/admin
[oracle@h1 admin]$ ll
total 8
drwxr-x--- 2 oracle oinstall 4096 Dec 16 15:02 samples
-rw-r----- 1 oracle oinstall  172 Dec 26  2003 shrept.lst
[oracle@h1 admin]$
Since Heartbeat is in place, external access should go through the floating IP, i.e. the VIP listed in /etc/hosts above; that way the VIP stays reachable no matter which node goes down.
Accordingly, the listener should listen on the floating IP rather than on an address tied to a particular node.
Now for the steps:
Invoke netca:
[oracle@h1 admin]$ netca
netca            netca_deinst.sh
[oracle@h1 admin]$ netca &
[1] 1573
[oracle@h1 admin]$ Oracle Net Services Configuration:

[oracle@h1 admin]$
[oracle@h1 admin]$ Configuring Listener:LISTENER
Default local naming configuration complete.
Listener configuration complete.
Oracle Net Listener Startup:
    Running Listener Control:
      /u01/app/product/10.2.0/db_1/bin/lsnrctl start LISTENER
    Listener Control complete.
    Listener started successfully.
Oracle Net Services configuration successful. The exit code is 0

[1]+  Done                    netca
[oracle@h1 admin]$
[oracle@h1 admin]$ pwd
/u01/app/product/10.2.0/db_1/network/admin
[oracle@h1 admin]$ ll
total 16
-rw-r--r-- 1 oracle oinstall  484 Dec 16 21:22 listener.ora
drwxr-x--- 2 oracle oinstall 4096 Dec 16 15:02 samples
-rw-r----- 1 oracle oinstall  172 Dec 26  2003 shrept.lst
-rw-r--r-- 1 oracle oinstall  345 Dec 16 21:22 tnsnames.ora
[oracle@h1 admin]$ cat listener.ora
# listener.ora Network Configuration File: /u01/app/product/10.2.0/db_1/network/admin/listener.ora
# Generated by Oracle configuration tools.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = h1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
    )
  )
[oracle@h1 admin]$
[oracle@h1 admin]$ cat tnsnames.ora
# tnsnames.ora Network Configuration File: /u01/app/product/10.2.0/db_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
    )
    (CONNECT_DATA =
      (SID = PLSExtProc)
      (PRESENTATION = RO)
    )
  )
[oracle@h1 admin]$
Modify listener.ora (comment out the node hostname and listen on the VIP instead):
[oracle@h1 admin]$ vi listener.ora
[oracle@h1 admin]$ cat listener.ora
# listener.ora Network Configuration File: /u01/app/product/10.2.0/db_1/network/admin/listener.ora
# Generated by Oracle configuration tools.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
#      (ADDRESS = (PROTOCOL = TCP)(HOST = h1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.56.150)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
    )
  )
[oracle@h1 admin]$
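For repeatability, the same host substitution can be done non-interactively with sed instead of vi. A sketch against a sample address line (hostname h1 and VIP 192.168.56.150 are this lab's values; against the real file it would be `sed -i.bak '…' listener.ora`):

```shell
# Swap the node hostname for the VIP in a listener address entry.
# Demonstrated on a here-document so nothing on disk is touched.
sed 's/(HOST = h1)/(HOST = 192.168.56.150)/' <<'EOF'
      (ADDRESS = (PROTOCOL = TCP)(HOST = h1)(PORT = 1521))
EOF
```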
Restart the listener to apply the new configuration:
[oracle@h1 admin]$ lsnrctl status

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 16-DEC-2015 21:27:23

Copyright (c) 1991, 2005, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.150)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date                16-DEC-2015 21:22:53
Uptime                    0 days 0 hr. 4 min. 30 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/product/10.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/product/10.2.0/db_1/network/log/listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=h1)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC0)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "orcl" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
Service "orcl_XPT" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@h1 admin]$
[oracle@h1 admin]$ lsnrctl stop

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 16-DEC-2015 21:27:27

Copyright (c) 1991, 2005, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.150)(PORT=1521)))
The command completed successfully
[oracle@h1 admin]$
[oracle@h1 admin]$ ps -ef | grep lsnr
oracle   17626  7467  0 21:27 pts/3    00:00:00 grep lsnr
[oracle@h1 admin]$
[oracle@h1 admin]$ lsnrctl start

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 16-DEC-2015 21:27:44

Copyright (c) 1991, 2005, Oracle.  All rights reserved.

Starting /u01/app/product/10.2.0/db_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 10.2.0.1.0 - Production
System parameter file is /u01/app/product/10.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/app/product/10.2.0/db_1/network/log/listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.150)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC0)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.150)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date                16-DEC-2015 21:27:44
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/product/10.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/product/10.2.0/db_1/network/log/listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.150)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC0)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[oracle@h1 admin]$
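During a failover it helps to confirm that something is actually answering on the VIP before declaring the resource up. A minimal sketch, assuming bash's /dev/tcp pseudo-device is available (the VIP and port are this lab's values):

```shell
#!/bin/bash
# Poll a TCP endpoint until it accepts a connection or the retry budget
# runs out. Returns 0 as soon as the listener answers.
wait_for_listener() {
  local host=$1 port=$2 tries=${3:-5} i
  for ((i = 0; i < tries; i++)); do
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      echo "listener on $host:$port is up"
      return 0
    fi
    sleep 1
  done
  echo "listener on $host:$port did not come up"
  return 1
}

# In this lab one would call: wait_for_listener 192.168.56.150 1521
```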
At this point, database creation and listener configuration are essentially complete.
So far, the database runs without problems on node one.
To let Heartbeat provide high availability for the Oracle database service, however, we still need start/stop scripts and start/stop logic for the listener and the instance, so that Heartbeat can manage them automatically.
——————
Making Heartbeat and Oracle work together.
Database instance: the LOCAL_LISTENER parameter (the instance currently runs on node one, so execute this step on node one):
[oracle@h1 admin]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Dec 16 21:54:34 2015

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter local_listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string
SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.150)(PORT=1521)))' scope=both;

System altered.

SQL>
SQL> show parameter local_listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      (DESCRIPTION=(ADDRESS=(PROTOCO
                                                 L=TCP)(HOST=192.168.56.150)(PO
                                                 RT=1521)))
SQL>
SQL> alter system register;

System altered.

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
[oracle@h1 admin]$
To avoid the database failing to start because of a bad initialization parameter, back up the startup parameters first:
SQL> show parameter spfile;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /u01/app/product/10.2.0/db_1/d
                                                 bs/spfileorcl.ora
SQL> create pfile='/home/oracle/init_orcl_local-listener.ora' from spfile;

File created.

SQL> !ls /home/oracle -ltr
total 4
-rw-r--r-- 1 oracle oinstall 1007 Dec 16 21:59 init_orcl_local-listener.ora

SQL>
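Should a later parameter change prevent startup, this backup gives a way out: boot the instance from the pfile and rebuild the spfile from it. A sketch of the usual SQL*Plus recovery steps, using the pfile path created above:

```
SQL> startup pfile='/home/oracle/init_orcl_local-listener.ora';
SQL> create spfile from pfile='/home/oracle/init_orcl_local-listener.ora';
SQL> shutdown immediate;
SQL> startup;
```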
Files and paths that need to be synchronized to node two:
1. The Oracle dump/log directory: $ORACLE_BASE/admin
2. $ORACLE_HOME/dbs
3. $ORACLE_HOME/network/admin
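The scp copies shown below can also be collapsed into one loop with rsync. A sketch under this lab's layout (hostname h2 and the paths come from this setup); the echo turns each rsync into a dry run, so drop it (or use `rsync -n` first) to actually transfer:

```shell
#!/bin/bash
# Push the three directories listed above from node one to node two.
ORACLE_BASE=/u01/app
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1

for dir in "$ORACLE_BASE/admin" "$ORACLE_HOME/dbs" "$ORACLE_HOME/network/admin"; do
  # Trailing slashes sync directory contents rather than nesting the dir.
  echo rsync -a "$dir/" "h2:$dir/"
done
```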
The Oracle dump/log directory:
Source path (node one):
[oracle@h1 admin]$ pwd
/u01/app/product/10.2.0/db_1/network/admin
[oracle@h1 admin]$ cd $ORACLE_BASE
[oracle@h1 app]$ ls
admin  oradata  oraInventory  product
[oracle@h1 app]$ tree admin/
admin/
`-- orcl
    |-- adump
    |   |-- ora_10270.aud
    |   |-- ora_10311.aud
    |   |-- ora_10504.aud
    |   |-- ora_10523.aud
    |   |-- ora_10932.aud
    |   |-- ora_10993.aud
    |   |-- ora_11010.aud
    |   |-- ora_11011.aud
    |   |-- ora_11288.aud
    |   |-- ora_11349.aud
    |   |-- ora_11350.aud
    |   |-- ora_13706.aud
    |   |-- ora_13750.aud
    |   |-- ora_1393.aud
    |   |-- ora_13968.aud
    |   |-- ora_1398.aud
    |   |-- ora_2363.aud
    |   |-- ora_24831.aud
    |   |-- ora_26264.aud
    |   |-- ora_30004.aud
    |   |-- ora_30553.aud
    |   |-- ora_30595.aud
    |   |-- ora_6656.aud
    |   |-- ora_6879.aud
    |   |-- ora_7724.aud
    |   |-- ora_8915.aud
    |   `-- ora_8944.aud
    |-- bdump
    |   |-- alert_orcl.log
    |   |-- orcl_lgwr_10284.trc
    |   |-- orcl_lgwr_13725.trc
    |   `-- orcl_lgwr_30572.trc
    |-- cdump
    |-- dpdump
    |-- pfile
    |   `-- init.ora.11162015202751
    `-- udump
        |-- orcl_ora_10270.trc
        |-- orcl_ora_10311.trc
        |-- orcl_ora_11350.trc
        |-- orcl_ora_13706.trc
        |-- orcl_ora_13750.trc
        |-- orcl_ora_2363.trc
        |-- orcl_ora_24831.trc
        |-- orcl_ora_30004.trc
        |-- orcl_ora_30553.trc
        |-- orcl_ora_30595.trc
        |-- orcl_ora_6879.trc
        |-- orcl_ora_8186.trc
        |-- orcl_ora_8915.trc
        `-- orcl_ora_8944.trc

7 directories, 46 files
[oracle@h1 app]$
Synchronize to node two:
State of node two before the sync:

[root@h2 ha.d]# su - oracle
[oracle@h2 ~]$ cd $ORACLE_BASE
[oracle@h2 app]$ ll
total 8
drwxrwx--- 6 oracle oinstall 4096 Dec 16 15:08 oraInventory
drwxr-xr-x 3 oracle oinstall 4096 Dec 16 14:15 product
[oracle@h2 app]$

On node one, start the sync:

[oracle@h1 app]$ pwd
/u01/app
[oracle@h1 app]$ ll
total 16
drwxr-x--- 3 oracle oinstall 4096 Dec 16 20:10 admin
drwxr-x--- 2 oracle oinstall 4096 Dec 16 20:10 oradata
drwxrwx--- 6 oracle oinstall 4096 Dec 16 15:06 oraInventory
drwxr-xr-x 3 oracle oinstall 4096 Dec 16 14:14 product
[oracle@h1 app]$ scp -r admin/ h2:$ORACLE_BASE
oracle@h2's password:
orcl_lgwr_13725.trc                           100%  670     0.7KB/s   00:00
alert_orcl.log                                100%   23KB  23.5KB/s   00:00
orcl_lgwr_30572.trc                           100%  698     0.7KB/s   00:00
orcl_lgwr_10284.trc                           100%  757     0.7KB/s   00:00
ora_30553.aud                                 100%  619     0.6KB/s   00:00
ora_10523.aud                                 100%  617     0.6KB/s   00:00
ora_10270.aud                                 100%  619     0.6KB/s   00:00
ora_6656.aud                                  100%  610     0.6KB/s   00:00
ora_11010.aud                                 100%  612     0.6KB/s   00:00
ora_8944.aud                                  100%  742     0.7KB/s   00:00
ora_10504.aud                                 100%  612     0.6KB/s   00:00
ora_7724.aud                                  100%  610     0.6KB/s   00:00
ora_10993.aud                                 100%  612     0.6KB/s   00:00
ora_13750.aud                                 100%  619     0.6KB/s   00:00
ora_11011.aud                                 100%  612     0.6KB/s   00:00
ora_24831.aud                                 100%  619     0.6KB/s   00:00
ora_10932.aud                                 100%  612     0.6KB/s   00:00
ora_1398.aud                                  100%  610     0.6KB/s   00:00
ora_26264.aud                                 100%  619     0.6KB/s   00:00
ora_10311.aud                                 100%  619     0.6KB/s   00:00
ora_11350.aud                                 100%  744     0.7KB/s   00:00
ora_30595.aud                                 100%  619     0.6KB/s   00:00
ora_8915.aud                                  100%  610     0.6KB/s   00:00
ora_13968.aud                                 100%  612     0.6KB/s   00:00
ora_11288.aud                                 100%  612     0.6KB/s   00:00
ora_6879.aud                                  100%  610     0.6KB/s   00:00
ora_1393.aud                                  100%  610     0.6KB/s   00:00
ora_13706.aud                                 100%  619     0.6KB/s   00:00
ora_30004.aud                                 100%  751     0.7KB/s   00:00
ora_11349.aud                                 100%  612     0.6KB/s   00:00
ora_2363.aud                                  100%  610     0.6KB/s   00:00
orcl_ora_6879.trc                             100% 5418     5.3KB/s   00:00
orcl_ora_13750.trc                            100%  647     0.6KB/s   00:01
orcl_ora_24831.trc                            100%  618     0.6KB/s   00:00
orcl_ora_30553.trc                            100%  590     0.6KB/s   00:00
orcl_ora_30004.trc                            100%  934     0.9KB/s   00:00
orcl_ora_11350.trc                            100%  776     0.8KB/s   00:00
orcl_ora_10311.trc                            100%  647     0.6KB/s   00:00
orcl_ora_10270.trc                            100%  590     0.6KB/s   00:00
orcl_ora_8186.trc                             100%  726     0.7KB/s   00:00
orcl_ora_8915.trc                             100% 1133     1.1KB/s   00:00
orcl_ora_8944.trc                             100%  774     0.8KB/s   00:00
orcl_ora_2363.trc                             100%  810     0.8KB/s   00:00
orcl_ora_30595.trc                            100% 2717     2.7KB/s   00:00
orcl_ora_13706.trc                            100%  590     0.6KB/s   00:00
init.ora.11162015202751                       100% 2394     2.3KB/s   00:00
[oracle@h1 app]$

State of node two after the sync:

[oracle@h2 app]$ pwd
/u01/app
[oracle@h2 app]$ ll
total 12
drwxr-x--- 3 oracle oinstall 4096 Dec 16 21:46 admin
drwxrwx--- 6 oracle oinstall 4096 Dec 16 15:08 oraInventory
drwxr-xr-x 3 oracle oinstall 4096 Dec 16 14:15 product
[oracle@h2 app]$
[oracle@h2 app]$ tree admin/
admin/
`-- orcl
    |-- adump
    |   |-- ora_10270.aud
    |   |-- ora_10311.aud
    |   |-- ora_10504.aud
    |   |-- ora_10523.aud
    |   |-- ora_10932.aud
    |   |-- ora_10993.aud
    |   |-- ora_11010.aud
    |   |-- ora_11011.aud
    |   |-- ora_11288.aud
    |   |-- ora_11349.aud
    |   |-- ora_11350.aud
    |   |-- ora_13706.aud
    |   |-- ora_13750.aud
    |   |-- ora_1393.aud
    |   |-- ora_13968.aud
    |   |-- ora_1398.aud
    |   |-- ora_2363.aud
    |   |-- ora_24831.aud
    |   |-- ora_26264.aud
    |   |-- ora_30004.aud
    |   |-- ora_30553.aud
    |   |-- ora_30595.aud
    |   |-- ora_6656.aud
    |   |-- ora_6879.aud
    |   |-- ora_7724.aud
    |   |-- ora_8915.aud
    |   `-- ora_8944.aud
    |-- bdump
    |   |-- alert_orcl.log
    |   |-- orcl_lgwr_10284.trc
    |   |-- orcl_lgwr_13725.trc
    |   `-- orcl_lgwr_30572.trc
    |-- cdump
    |-- dpdump
    |-- pfile
    |   `-- init.ora.11162015202751
    `-- udump
        |-- orcl_ora_10270.trc
        |-- orcl_ora_10311.trc
        |-- orcl_ora_11350.trc
        |-- orcl_ora_13706.trc
        |-- orcl_ora_13750.trc
        |-- orcl_ora_2363.trc
        |-- orcl_ora_24831.trc
        |-- orcl_ora_30004.trc
        |-- orcl_ora_30553.trc
        |-- orcl_ora_30595.trc
        |-- orcl_ora_6879.trc
        |-- orcl_ora_8186.trc
        |-- orcl_ora_8915.trc
        `-- orcl_ora_8944.trc

7 directories, 46 files
[oracle@h2 app]$
$ORACLE_HOME/dbs:
Source path, node one:
[oracle@h1 app]$ cd $ORACLE_HOME/dbs
[oracle@h1 dbs]$ ll
total 44
-rw-rw---- 1 oracle oinstall  1552 Dec 16 20:29 hc_orcl.dat
-rw-r----- 1 oracle oinstall 12920 May  3  2001 initdw.ora
-rw-r----- 1 oracle oinstall  8385 Sep 11  1998 init.ora
-rw-rw---- 1 oracle oinstall    24 Dec 16 20:17 lkORCL
-rw-r----- 1 oracle oinstall  1536 Dec 16 20:27 orapworcl
-rw-r----- 1 oracle oinstall  2560 Dec 16 20:29 spfileorcl.ora
[oracle@h1 dbs]$
Synchronize to node two:
State of node two before the sync:

[oracle@h2 app]$ cd $ORACLE_HOME/dbs
[oracle@h2 dbs]$ ll
total 28
-rw-r----- 1 oracle oinstall 12920 May  3  2001 initdw.ora
-rw-r----- 1 oracle oinstall  8385 Sep 11  1998 init.ora
[oracle@h2 dbs]$

Start the sync:

[oracle@h1 dbs]$ scp * h2:$ORACLE_HOME/dbs
oracle@h2's password:
hc_orcl.dat                                   100% 1552     1.5KB/s   00:00
initdw.ora                                    100%   13KB  12.6KB/s   00:00
init.ora                                      100% 8385     8.2KB/s   00:00
lkORCL                                        100%   24     0.0KB/s   00:00
orapworcl                                     100% 1536     1.5KB/s   00:00
spfileorcl.ora                                100% 2560     2.5KB/s   00:00
[oracle@h1 dbs]$

State of node two after the sync:

[oracle@h2 dbs]$ pwd
/u01/app/product/10.2.0/db_1/dbs
[oracle@h2 dbs]$
[oracle@h2 dbs]$ ll
total 44
-rw-r----- 1 oracle oinstall  1552 Dec 16 21:50 hc_orcl.dat
-rw-r----- 1 oracle oinstall 12920 Dec 16 21:50 initdw.ora
-rw-r----- 1 oracle oinstall  8385 Dec 16 21:50 init.ora
-rw-r----- 1 oracle oinstall    24 Dec 16 21:50 lkORCL
-rw-r----- 1 oracle oinstall  1536 Dec 16 21:50 orapworcl
-rw-r----- 1 oracle oinstall  2560 Dec 16 21:50 spfileorcl.ora
[oracle@h2 dbs]$
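scp reports per-file progress but says nothing about integrity. A small sketch for double-checking a copied directory by checksum; it is shown for two local paths, while in this lab the remote side would be read through ssh (e.g. `ssh h2 "cd $ORACLE_HOME/dbs && md5sum *"`), which is left out here:

```shell
#!/bin/bash
# Compare the per-file checksums of two directories; exit status 0 means
# every regular file matches. Subdirectories are skipped (md5sum errors
# on them are discarded).
compare_dirs() {
  diff <(cd "$1" && md5sum * 2>/dev/null | sort) \
       <(cd "$2" && md5sum * 2>/dev/null | sort)
}
```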
$ORACLE_HOME/network/admin:
Source directory, node one:
[oracle@h1 dbs]$ cd $ORACLE_HOME/network/admin
[oracle@h1 admin]$ ll
total 16
-rw-r--r-- 1 oracle oinstall  555 Dec 16 21:26 listener.ora
drwxr-x--- 2 oracle oinstall 4096 Dec 16 15:02 samples
-rw-r----- 1 oracle oinstall  172 Dec 26  2003 shrept.lst
-rw-r--r-- 1 oracle oinstall  345 Dec 16 21:22 tnsnames.ora
[oracle@h1 admin]$
Synchronize to node two:
State of node two before the sync:

[oracle@h2 dbs]$ cd $ORACLE_HOME/network/admin
[oracle@h2 admin]$ ll
total 8
drwxr-x--- 2 oracle oinstall 4096 Dec 16 15:05 samples
-rw-r----- 1 oracle oinstall  172 Dec 26  2003 shrept.lst
[oracle@h2 admin]$

Start the sync:

[oracle@h1 admin]$ scp -r * h2:$ORACLE_HOME/network/admin
oracle@h2's password:
listener.ora                                  100%  555     0.5KB/s   00:00
tnsnames.ora                                  100% 2939     2.9KB/s   00:00
sqlnet.ora                                    100%   30KB  30.3KB/s   00:01
listener.ora                                  100% 3867     3.8KB/s   00:00
shrept.lst                                    100%  172     0.2KB/s   00:00
tnsnames.ora                                  100%  345     0.3KB/s   00:00
[oracle@h1 admin]$
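With both nodes holding identical configuration, the remaining piece is the start/stop logic mentioned earlier. Heartbeat v1 (haresources-style) calls a resource script with `start` on the active node and `stop` during failover. The following is a hypothetical sketch of such a script (e.g. saved as /etc/ha.d/resource.d/oradb); the su/sqlplus/lsnrctl calls mirror the manual steps above, and a production script would need error handling and a status action:

```shell
#!/bin/bash
# Hypothetical Heartbeat resource script for the Oracle instance + listener.
ORACLE_OWNER=oracle
ORACLE_HOME=/u01/app/product/10.2.0/db_1

start_oracle() {
  # Listener first, so the instance can register on the VIP at startup.
  su - "$ORACLE_OWNER" -c "$ORACLE_HOME/bin/lsnrctl start"
  su - "$ORACLE_OWNER" -c "echo startup | $ORACLE_HOME/bin/sqlplus -S / as sysdba"
}

stop_oracle() {
  su - "$ORACLE_OWNER" -c "echo 'shutdown immediate' | $ORACLE_HOME/bin/sqlplus -S / as sysdba"
  su - "$ORACLE_OWNER" -c "$ORACLE_HOME/bin/lsnrctl stop"
}

case "$1" in
  start) start_oracle ;;
  stop)  stop_oracle ;;
  *)     echo "Usage: $0 {start|stop}" ;;
esac
```

With a script like this in place, the haresources line for the cluster could list the VIP resource followed by oradb, so Heartbeat brings up the address before the database.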