MongoDB: One Primary, One Secondary, One Arbiter Cluster Install on RHEL 7.6
MongoDB clusters can be deployed in three main ways:
1. Replica Set
2. Sharding
3. Master / Slave
Master / Slave # no longer recommended
This is not really a cluster mode, just a master-standby arrangement, and it is no longer recommended officially.
Replica Set # relatively simple to deploy
It requires:
A primary node (M)
A secondary node (S)
An arbiter node (A: Arbiter)

In this topology:
1. The primary node serves all inserts, deletes, queries, and updates; by default the secondary serves nothing.
2. By setting Read-Preference-Modes (http://docs.mongodb.org/manual/applications/replication/#read-preference-modes) you can have the secondary serve queries, reducing the load on the primary: client read requests are then routed to the secondary automatically.
3. The primary and secondary store data; the arbiter stores no data.
4. Clients connect to both the primary and the secondary, but not to the arbiter.
5. The arbiter is a special node: it stores no data itself, and its main job is to decide which secondary gets promoted to primary after the primary goes down. Clients therefore never need to connect to it.
Sharding # the most complex of the three deployment modes
Like a Replica Set, it needs:
Arbiter nodes;
Unlike a Replica Set, it also needs:
Config nodes (Config Servers)
Router nodes (Router)

This article demonstrates a MongoDB cluster deployed as a Replica Set.
Environment:
Three servers are involved:
Node 1: 192.168.126.141 # primary
Node 2: 192.168.126.142 # secondary
Node 3: 192.168.126.143 # arbiter
All of them have:
1. RHEL 7.6
2. Static IP addresses on the NIC
3. Firewall: stopped and disabled at boot
4. SELinux: disabled
OS Configuration (all nodes)
System-wide maximum number of file handles:
[root@tidb1 ~]# sysctl -w fs.file-max=2097152
fs.file-max = 2097152
[root@tidb1 ~]# sysctl -w fs.nr_open=2097152
fs.nr_open = 2097152
[root@tidb1 ~]# echo 2097152 > /proc/sys/fs/nr_open
[root@tidb1 ~]#
Maximum file handles the current process/session may open:
[root@tidb1 ~]# ulimit -n
1048576
[root@tidb1 ~]#
Persist the file-handle configuration via /etc/sysctl.conf (here, through a drop-in file under /etc/sysctl.d/):
[root@tidb1 ~]# cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
[root@tidb1 ~]#
[root@tidb1 ~]# ls -ltr /etc/sysctl.d/
total 0
lrwxrwxrwx. 1 root root 14 Aug  5 05:59 99-sysctl.conf -> ../sysctl.conf
[root@tidb1 ~]#
[root@tidb1 ~]# vi /etc/sysctl.d/70-mongodb.conf
[root@tidb1 ~]#
[root@tidb1 ~]# cat /etc/sysctl.d/70-mongodb.conf
# for MongoDB
fs.file-max = 1048576
[root@tidb1 ~]#
[root@tidb1 ~]# sysctl -a | grep file-max
fs.file-max = 2097152
sysctl: reading key "net.ipv6.conf.all.stable_secret"
sysctl: reading key "net.ipv6.conf.default.stable_secret"
sysctl: reading key "net.ipv6.conf.ens32.stable_secret"
sysctl: reading key "net.ipv6.conf.lo.stable_secret"
[root@tidb1 ~]#
[root@tidb1 ~]# cd /etc/sysctl.d/
[root@tidb1 sysctl.d]#
[root@tidb1 sysctl.d]# sysctl --load 70-mongodb.conf
fs.file-max = 1048576
[root@tidb1 sysctl.d]#
[root@tidb1 sysctl.d]# sysctl -a | grep file-max
fs.file-max = 1048576
sysctl: reading key "net.ipv6.conf.all.stable_secret"
sysctl: reading key "net.ipv6.conf.default.stable_secret"
sysctl: reading key "net.ipv6.conf.ens32.stable_secret"
sysctl: reading key "net.ipv6.conf.lo.stable_secret"
[root@tidb1 sysctl.d]#
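The drop-in above can also be generated non-interactively, which avoids vi typos when you repeat it on three nodes. A minimal sketch; it writes to a scratch directory so it is safe to run anywhere (on a real node the target would be /etc/sysctl.d/70-mongodb.conf, followed by sysctl --load as shown above):

```shell
# Sketch: build the sysctl drop-in in a scratch directory.
tmpdir=$(mktemp -d)
cat > "$tmpdir/70-mongodb.conf" <<'EOF'
# for MongoDB
fs.file-max = 1048576
EOF
# Print the effective (non-comment) settings that would be loaded
grep -v '^#' "$tmpdir/70-mongodb.conf"
```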
The file /etc/security/limits.conf:
[root@tidb1 sysctl.d]# vi /etc/security/limits.conf
[root@tidb1 sysctl.d]#
[root@tidb1 sysctl.d]# cat /etc/security/limits.conf
# /etc/security/limits.conf
#
#This file sets the resource limits for the users logged in via PAM.
#It does not affect resource limits of the system services.
#
#Also note that configuration files in /etc/security/limits.d directory,
#which are read in alphabetical order, override the settings in this
#file in case the domain is the same or more specific.
#That means for example that setting a limit for wildcard domain here
#can be overriden with a wildcard setting in a config file in the
#subdirectory, but a user specific setting here can be overriden only
#with a user specific setting in the subdirectory.
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>
#
#Where:
#<domain> can be:
#        - a user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open file descriptors
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit (KB)
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to values: [-20, 19]
#        - rtprio - max realtime priority
#
#<domain>      <type>  <item>         <value>
#
#*               soft    core            0
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#@student        -       maxlogins       4

# End of file

# For MongoDB
* soft nofile 1048576
* hard nofile 1048576
* soft nproc 524288
* hard nproc 524288
[root@tidb1 sysctl.d]#
[root@tidb1 sysctl.d]# scp /etc/security/limits.conf tidb2:/etc/security/
root@tidb2's password:
limits.conf                                  100% 2522   543.4KB/s   00:00
[root@tidb1 sysctl.d]#
[root@tidb1 sysctl.d]# scp /etc/security/limits.conf tidb3:/etc/security/
root@tidb3's password:
limits.conf                                  100% 2522     1.1MB/s   00:00
[root@tidb1 sysctl.d]#
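After copying the file to the other nodes it is easy to miss one, so a quick check helps. A sketch of such a check; the four required lines are taken from the listing above, and the script generates a sample file so it can run anywhere (on a real node you would point it at /etc/security/limits.conf, e.g. over ssh on each of tidb1..tidb3):

```shell
# Sketch: verify that a limits.conf contains the four MongoDB entries.
f=$(mktemp)
printf '%s\n' \
  '* soft nofile 1048576' \
  '* hard nofile 1048576' \
  '* soft nproc 524288' \
  '* hard nproc 524288' > "$f"
for item in 'soft nofile 1048576' 'hard nofile 1048576' \
            'soft nproc 524288'  'hard nproc 524288'; do
  # Each required line must appear at the start of a line, for domain "*"
  grep -q "^\* $item" "$f" && echo "OK: $item" || echo "MISSING: $item"
done
```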
TCP stack parameters:
Connection backlog:
[root@tidb1 ~]# sysctl -w net.core.somaxconn=32768
net.core.somaxconn = 32768
[root@tidb1 ~]# sysctl -w net.ipv4.tcp_max_syn_backlog=16384
net.ipv4.tcp_max_syn_backlog = 16384
[root@tidb1 ~]# sysctl -w net.core.netdev_max_backlog=16384
net.core.netdev_max_backlog = 16384
[root@tidb1 ~]#

Available local port range:
[root@tidb1 ~]# sysctl -w net.ipv4.ip_local_port_range='80 65535'
net.ipv4.ip_local_port_range = 80 65535
[root@tidb1 ~]#

TCP socket read/write buffers:
[root@tidb1 ~]# sysctl -w net.core.rmem_default=262144
net.core.rmem_default = 262144
[root@tidb1 ~]# sysctl -w net.core.wmem_default=262144
net.core.wmem_default = 262144
[root@tidb1 ~]# sysctl -w net.core.rmem_max=16777216
net.core.rmem_max = 16777216
[root@tidb1 ~]# sysctl -w net.core.wmem_max=16777216
net.core.wmem_max = 16777216
[root@tidb1 ~]# sysctl -w net.core.optmem_max=16777216
net.core.optmem_max = 16777216
[root@tidb1 ~]# sysctl -w net.ipv4.tcp_rmem='1024 4096 16777216'
net.ipv4.tcp_rmem = 1024 4096 16777216
[root@tidb1 ~]# sysctl -w net.ipv4.tcp_wmem='1024 4096 16777216'
net.ipv4.tcp_wmem = 1024 4096 16777216
[root@tidb1 ~]#

Only needed on releases older than RHEL 7; on RHEL 7 and later these can be skipped:
TCP connection tracking
sysctl -w net.nf_conntrack_max=1000000
sysctl -w net.netfilter.nf_conntrack_max=1000000
sysctl -w net.netfilter.nf_conntrack_tcp_timeout_time_wait=30
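Note that sysctl -w only changes the running kernel; to survive a reboot the same keys need to land in a sysctl.d fragment too. A sketch of such a fragment (the file name 71-mongodb-net.conf is my own choice, and the values simply mirror the commands above; the sketch writes to a scratch directory rather than /etc/sysctl.d/):

```shell
# Sketch: persist the TCP tuning shown above in one drop-in file.
tmpdir=$(mktemp -d)
cat > "$tmpdir/71-mongodb-net.conf" <<'EOF'
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 16384
net.core.netdev_max_backlog = 16384
net.ipv4.ip_local_port_range = 80 65535
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 1024 4096 16777216
net.ipv4.tcp_wmem = 1024 4096 16777216
EOF
# Count the keys that would be applied on the next `sysctl --load`
wc -l < "$tmpdir/71-mongodb-net.conf"
```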
Install the MongoDB Software (all three nodes)
Download the installation media from the official website:
https://www.mongodb.com/try/download/community

As shown above, under "Available Downloads", pick the version, platform, and package you want, and you can download the installation media.
All the installation media can also be found directly at this path:
https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.6/x86_64/RPMS/
Here, I chose the installation media for RHEL 7.
Upload the installation media to the server:

[root@tidb1 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)
[root@tidb1 ~]#
[root@tidb1 ~]# ls -ltr /software
total 11788
drwxr-xr-x  3 root root     4096 Aug 10 06:43 Percona_MySQL_Distribute_EL7
-rw-r--r--  1 root root 12062720 Aug 24 02:24 haproxy-2.2.2.tar.gz
drwxrwxr-x 11 root root     4096 Aug 25 18:37 haproxy-2.2.2
[root@tidb1 ~]#
[root@tidb1 ~]# ls -ltr /software
total 32840
drwxr-xr-x  3 root root     4096 Aug 10 06:43 Percona_MySQL_Distribute_EL7
-rw-r--r--  1 root root 12062720 Aug 24 02:24 haproxy-2.2.2.tar.gz
drwxrwxr-x 11 root root     4096 Aug 25 18:37 haproxy-2.2.2
-rw-r--r--  1 root root 21555816 Aug 28 01:48 mongodb-org-server-3.6.20-0.1.rc0.el7.x86_64.rpm
[root@tidb1 ~]#
[root@tidb1 ~]# du -sh /software/mongodb-org-server-3.6.20-0.1.rc0.el7.x86_64.rpm
21M     /software/mongodb-org-server-3.6.20-0.1.rc0.el7.x86_64.rpm
[root@tidb1 ~]#
Send the installation media to the other two nodes with scp:
[root@tidb1 ~]# cat /etc/hosts
#127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

# Local
127.0.0.1 localhost localhost.localdomain

# Public
192.168.126.141 tidb1
192.168.126.142 tidb2
192.168.126.143 tidb3

# Finished.
[root@tidb1 ~]#
[root@tidb1 ~]# scp /software/mongodb-org-server-3.6.20-0.1.rc0.el7.x86_64.rpm tidb2:/software
root@tidb2's password:
mongodb-org-server-3.6.20-0.1.rc0.el7.x86_64.rpm        100%   21MB  40.8MB/s   00:00
[root@tidb1 ~]#
[root@tidb1 ~]# scp /software/mongodb-org-server-3.6.20-0.1.rc0.el7.x86_64.rpm tidb3:/software
root@tidb3's password:
mongodb-org-server-3.6.20-0.1.rc0.el7.x86_64.rpm        100%   21MB  39.1MB/s   00:00
[root@tidb1 ~]#
Install:
[root@tidb1 ~]# rpm -ivh /software/mongodb-org-server-3.6.20-0.1.rc0.el7.x86_64.rpm
warning: /software/mongodb-org-server-3.6.20-0.1.rc0.el7.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 91fa4ad5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:mongodb-org-server-3.6.20-0.1.rc0################################# [100%]
Created symlink from /etc/systemd/system/multi-user.target.wants/mongod.service to /usr/lib/systemd/system/mongod.service.
[root@tidb1 ~]#
[root@tidb1 ~]# ssh tidb2 "rpm -ivh /software/mongodb-org-server-3.6.20-0.1.rc0.el7.x86_64.rpm"
root@tidb2's password:
warning: /software/mongodb-org-server-3.6.20-0.1.rc0.el7.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 91fa4ad5: NOKEY
Preparing...                          ########################################
Updating / installing...
mongodb-org-server-3.6.20-0.1.rc0.el7 ########################################
Created symlink from /etc/systemd/system/multi-user.target.wants/mongod.service to /usr/lib/systemd/system/mongod.service.
[root@tidb1 ~]#
[root@tidb1 ~]# ssh tidb3 "rpm -ivh /software/mongodb-org-server-3.6.20-0.1.rc0.el7.x86_64.rpm"
root@tidb3's password:
warning: /software/mongodb-org-server-3.6.20-0.1.rc0.el7.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 91fa4ad5: NOKEY
Preparing...                          ########################################
Updating / installing...
mongodb-org-server-3.6.20-0.1.rc0.el7 ########################################
Created symlink from /etc/systemd/system/multi-user.target.wants/mongod.service to /usr/lib/systemd/system/mongod.service.
[root@tidb1 ~]#

Verify:
[root@tidb1 ~]# rpm -qa | grep mongo
mongodb-org-server-3.6.20-0.1.rc0.el7.x86_64
[root@tidb1 ~]#
[root@tidb1 ~]# ssh tidb2 "rpm -qa | grep mongo"
root@tidb2's password:
mongodb-org-server-3.6.20-0.1.rc0.el7.x86_64
[root@tidb1 ~]#
[root@tidb1 ~]# ssh tidb3 "rpm -qa | grep mongo"
root@tidb3's password:
mongodb-org-server-3.6.20-0.1.rc0.el7.x86_64
[root@tidb1 ~]#
Create MongoDB Directories (operations differ per node)
Create the relevant directories with mkdir -p:
Log directory (all nodes):
[root@tidb1 ~]# mkdir /mongodb_data/log -p
[root@tidb1 ~]#
Data directory:
Node 1:
[root@tidb1 ~]# mkdir /mongodb_data/master -p
[root@tidb1 ~]# ls -ltr / | grep mongo
drwxr-xr-x 3 root root 20 Aug 28 02:41 mongodb_data
[root@tidb1 ~]#
[root@tidb1 ~]# tree /mongodb_data
/mongodb_data
├── log
└── master

2 directories, 0 files
[root@tidb1 ~]#

Node 2:
[root@tidb2 ~]# mkdir /mongodb_data/slave -p
[root@tidb2 ~]# ls -ltr / | grep mongo
drwxr-xr-x 3 root root 19 Aug 28 17:41 mongodb_data
[root@tidb2 ~]#
[root@tidb2 ~]# tree /mongodb_data
/mongodb_data
├── log
└── slave

2 directories, 0 files
[root@tidb2 ~]#

Node 3:
[root@tidb3 ~]# mkdir /mongodb_data/arbiter -p
[root@tidb3 ~]# ls -ltr / | grep mongo
drwxr-xr-x 3 root root 21 Aug 28 17:41 mongodb_data
[root@tidb3 ~]#
[root@tidb3 ~]# tree /mongodb_data
/mongodb_data
├── arbiter
└── log

2 directories, 0 files
[root@tidb3 ~]#
Configuration file directory:
[root@tidb1 ~]# mkdir /mongodb_data/conf
[root@tidb1 ~]#
Set ownership:
[root@tidb1 ~]# chown -R mongod.mongod /mongodb_data/
[root@tidb1 ~]#
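The three mkdir/chown sequences differ only in the role-specific data directory, so they can be scripted per node. A sketch, assuming role is set to master on node 1, slave on node 2, and arbiter on node 3; it runs against a scratch directory and keeps the chown commented out so it is safe anywhere:

```shell
# Sketch: create the MongoDB directory layout for one node.
role=master                 # slave on node 2, arbiter on node 3
base=$(mktemp -d)           # on a real node: base=/mongodb_data
mkdir -p "$base/log" "$base/conf" "$base/$role"
# chown -R mongod.mongod "$base"   # as root, once the dirs exist
ls "$base"
```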
Configure MongoDB (operations differ per node)

MongoDB's default configuration file is /etc/mongod.conf:
[root@tidb1 ~]# ls -ltr /etc/ | grep mongo
-rw-r--r-- 1 root root 804 Aug 25 14:10 mongod.conf
[root@tidb1 ~]#
Since the three nodes play three different roles, the configuration differs on each node.
Configuration file: Node 1 / Master
The configuration file:
[root@tidb1 conf]# pwd
/mongodb_data/conf
[root@tidb1 conf]#
[root@tidb1 conf]# ls -ltr
total 4
-rw-r--r-- 1 root root 229 Aug 28 06:11 master.conf
[root@tidb1 conf]#
[root@tidb1 conf]# cat master.conf
dbpath=/mongodb_data/master
logpath=/mongodb_data/log/master.log
pidfilepath=/mongodb_data/master.pid
directoryperdb=true
logappend=true
replSet=replme
bind_ip=192.168.126.141
port=27017
oplogSize=10000
fork=true
noprealloc=true
[root@tidb1 conf]#
Configuration file: Node 2 / Slave
The configuration file:
[root@tidb2 conf]# pwd
/mongodb_data/conf
[root@tidb2 conf]#
[root@tidb2 conf]# ls -ltr
total 4
-rw-r--r-- 1 root root 226 Aug 28 21:11 slave.conf
[root@tidb2 conf]#
[root@tidb2 conf]# cat slave.conf
dbpath=/mongodb_data/slave
logpath=/mongodb_data/log/slave.log
pidfilepath=/mongodb_data/slave.pid
directoryperdb=true
logappend=true
replSet=replme
bind_ip=192.168.126.142
port=27017
oplogSize=10000
fork=true
noprealloc=true
[root@tidb2 conf]#
Configuration file: Node 3 / Arbiter
The configuration file:
[root@tidb3 conf]# pwd
/mongodb_data/conf
[root@tidb3 conf]#
[root@tidb3 conf]# ls -ltr
total 4
-rw-r--r-- 1 root root 232 Aug 28 21:12 arbiter.conf
[root@tidb3 conf]#
[root@tidb3 conf]# cat arbiter.conf
dbpath=/mongodb_data/arbiter
logpath=/mongodb_data/log/arbiter.log
pidfilepath=/mongodb_data/arbiter.pid
directoryperdb=true
logappend=true
replSet=replme
bind_ip=192.168.126.143
port=27017
oplogSize=10000
fork=true
noprealloc=true
[root@tidb3 conf]#
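The three configuration files differ only in the role name and the bind address, so they can be rendered from one template. A sketch; the role-to-IP mapping comes from the environment table above, and the files are written to a scratch directory rather than /mongodb_data/conf:

```shell
# Sketch: render master.conf / slave.conf / arbiter.conf from a template.
outdir=$(mktemp -d)          # on a real node: /mongodb_data/conf
for pair in master:192.168.126.141 slave:192.168.126.142 arbiter:192.168.126.143; do
  role=${pair%%:*}
  ip=${pair##*:}
  cat > "$outdir/$role.conf" <<EOF
dbpath=/mongodb_data/$role
logpath=/mongodb_data/log/$role.log
pidfilepath=/mongodb_data/$role.pid
directoryperdb=true
logappend=true
replSet=replme
bind_ip=$ip
port=27017
oplogSize=10000
fork=true
noprealloc=true
EOF
done
# One bind_ip line per generated role file
grep -h '^bind_ip' "$outdir"/*.conf
```

Each node then only needs its own file copied into place; the replSet name stays identical everywhere, which is what ties the three mongod processes into one set.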
Start the Nodes
Node 1
[root@tidb1 conf]# pwd
/mongodb_data/conf
[root@tidb1 conf]#
[root@tidb1 conf]# ps -ef | grep mongo
root      54792  54742  0 05:53 pts/3    00:00:00 tail -f /mongodb_data/log/master.log
root      56236  41317  0 06:17 pts/0    00:00:00 grep --color=auto mongo
[root@tidb1 conf]#
[root@tidb1 conf]# netstat -tupln | grep mongo
[root@tidb1 conf]#
[root@tidb1 conf]# which mongod
/usr/bin/mongod
[root@tidb1 conf]#
[root@tidb1 conf]# mongod --version
db version v3.6.20-rc0
git version: d8c7c2c514fb8a44b7b2a731105582b66ed7b253
OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
allocator: tcmalloc
modules: none
build environment:
    distmod: rhel70
    distarch: x86_64
    target_arch: x86_64
[root@tidb1 conf]#
[root@tidb1 conf]# mongod -f master.conf
2020-08-28T06:30:07.176-0700 I CONTROL  [main] note: noprealloc may hurt performance in many applications
about to fork child process, waiting until server is ready for connections.
forked process: 56938
child process started successfully, parent exiting
[root@tidb1 conf]#
[root@tidb1 conf]# ps -ef | grep mongo
root      54792  54742  0 05:53 pts/3    00:00:00 tail -f /mongodb_data/log/master.log
root      56938      1  2 06:30 ?        00:00:01 mongod -f master.conf
root      57011  41317  0 06:30 pts/0    00:00:00 grep --color=auto mongo
[root@tidb1 conf]#
[root@tidb1 conf]# netstat -tupln | grep mongo
tcp    0    0 192.168.126.141:27017    0.0.0.0:*    LISTEN    56938/mongod
[root@tidb1 conf]#
As you can see, the service started successfully.
Logs generated during this process:
/var/log/messages
Aug 28 06:27:50 tidb1 abrt-hook-ccpp: Process 56800 (mongod) of user 0 killed by SIGABRT - dumping core
Aug 28 06:27:50 tidb1 abrt-server: Package 'mongodb-org-server' isn't signed with proper key
Aug 28 06:27:50 tidb1 abrt-server: 'post-create' on '/var/spool/abrt/ccpp-2020-08-28-06:27:50-56800' exited with 1
Aug 28 06:27:50 tidb1 abrt-server: Deleting problem directory '/var/spool/abrt/ccpp-2020-08-28-06:27:50-56800'
Aug 28 06:30:01 tidb1 systemd: Started Session 892 of user root.
The MongoDB log:
2020-08-28T06:30:07.181-0700 I CONTROL  [main] ***** SERVER RESTARTED *****
2020-08-28T06:30:07.200-0700 I CONTROL  [initandlisten] MongoDB starting : pid=56938 port=27017 dbpath=/mongodb_data/master 64-bit host=tidb1
2020-08-28T06:30:07.200-0700 I CONTROL  [initandlisten] db version v3.6.20-rc0
2020-08-28T06:30:07.200-0700 I CONTROL  [initandlisten] git version: d8c7c2c514fb8a44b7b2a731105582b66ed7b253
2020-08-28T06:30:07.200-0700 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2020-08-28T06:30:07.200-0700 I CONTROL  [initandlisten] allocator: tcmalloc
2020-08-28T06:30:07.200-0700 I CONTROL  [initandlisten] modules: none
2020-08-28T06:30:07.200-0700 I CONTROL  [initandlisten] build environment:
2020-08-28T06:30:07.200-0700 I CONTROL  [initandlisten]     distmod: rhel70
2020-08-28T06:30:07.200-0700 I CONTROL  [initandlisten]     distarch: x86_64
2020-08-28T06:30:07.200-0700 I CONTROL  [initandlisten]     target_arch: x86_64
2020-08-28T06:30:07.201-0700 I CONTROL  [initandlisten] options: { config: "master.conf", net: { bindIp: "192.168.126.141", port: 27017 }, processManagement: { fork: true, pidFilePath: "/mongodb_data/master.pid" }, replication: { oplogSizeMB: 10000, replSet: "replme" }, storage: { dbPath: "/mongodb_data/master", directoryPerDB: true, mmapv1: { preallocDataFiles: false } }, systemLog: { destination: "file", logAppend: true, path: "/mongodb_data/log/master.log" } }
2020-08-28T06:30:07.201-0700 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=3389M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),compatibility=(release="3.0",require_max="3.0"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2020-08-28T06:30:08.233-0700 I STORAGE  [initandlisten] WiredTiger message [1598621408:233473][56938:0x7f6c36c5ab80], txn-recover: Set global recovery timestamp: 0
2020-08-28T06:30:08.264-0700 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2020-08-28T06:30:08.264-0700 I CONTROL  [initandlisten]
2020-08-28T06:30:08.264-0700 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2020-08-28T06:30:08.264-0700 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2020-08-28T06:30:08.264-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-08-28T06:30:08.264-0700 I CONTROL  [initandlisten]
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten]
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten]
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten]
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 31126 processes, 1048576 files. Number of processes should be at least 524288 : 0.5 times number of files.
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten]
2020-08-28T06:30:08.268-0700 I STORAGE  [initandlisten] createCollection: local.startup_log with no UUID.
2020-08-28T06:30:08.279-0700 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/mongodb_data/master/diagnostic.data'
2020-08-28T06:30:08.279-0700 I STORAGE  [initandlisten] createCollection: local.me with no UUID.
2020-08-28T06:30:08.291-0700 I STORAGE  [initandlisten] createCollection: local.replset.minvalid with no UUID.
2020-08-28T06:30:08.307-0700 I STORAGE  [initandlisten] createCollection: local.replset.election with no UUID.
2020-08-28T06:30:08.313-0700 I REPL     [initandlisten] Did not find local initialized voted for document at startup.
2020-08-28T06:30:08.313-0700 I REPL     [initandlisten] Did not find local Rollback ID document at startup. Creating one.
2020-08-28T06:30:08.313-0700 I STORAGE  [initandlisten] createCollection: local.system.rollback.id with no UUID.
2020-08-28T06:30:08.327-0700 I REPL     [initandlisten] Initialized the rollback ID to 1
2020-08-28T06:30:08.327-0700 I REPL     [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2020-08-28T06:30:08.328-0700 I NETWORK  [initandlisten] listening via socket bound to 192.168.126.141
2020-08-28T06:30:08.328-0700 I NETWORK  [initandlisten] listening via socket bound to /tmp/mongodb-27017.sock
2020-08-28T06:30:08.328-0700 I NETWORK  [initandlisten] waiting for connections on port 27017
2020-08-28T06:30:08.328-0700 I CONTROL  [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
Node 2:
[root@tidb2 conf]# pwd
/mongodb_data/conf
[root@tidb2 conf]#
[root@tidb2 conf]# ls -ltr
total 4
-rw-r--r-- 1 root root 226 Aug 28 21:11 slave.conf
[root@tidb2 conf]#
[root@tidb2 conf]# cat slave.conf
dbpath=/mongodb_data/slave
logpath=/mongodb_data/log/slave.log
pidfilepath=/mongodb_data/slave.pid
directoryperdb=true
logappend=true
replSet=replme
bind_ip=192.168.126.142
port=27017
oplogSize=10000
fork=true
noprealloc=true
[root@tidb2 conf]#
[root@tidb2 conf]# ps -ef | grep mongo
root      23597  23229  0 21:34 pts/0    00:00:00 grep --color=auto mongo
[root@tidb2 conf]#
[root@tidb2 conf]# netstat -tupln | grep mongo
[root@tidb2 conf]#
[root@tidb2 conf]# mongod -f slave.conf
2020-08-28T21:34:44.827+0800 I CONTROL  [main] note: noprealloc may hurt performance in many applications
about to fork child process, waiting until server is ready for connections.
forked process: 23603
child process started successfully, parent exiting
[root@tidb2 conf]#
[root@tidb2 conf]# ps -ef | grep mongo
root      23603      1 14 21:34 ?        00:00:01 mongod -f slave.conf
root      23632  23229  0 21:34 pts/0    00:00:00 grep --color=auto mongo
[root@tidb2 conf]#
[root@tidb2 conf]# netstat -tupln | grep mongo
tcp    0    0 192.168.126.142:27017    0.0.0.0:*    LISTEN    23603/mongod
[root@tidb2 conf]#
As you can see, Node 2's service started successfully as well.
Node 3:
[root@tidb3 conf]# pwd
/mongodb_data/conf
[root@tidb3 conf]#
[root@tidb3 conf]# ls -ltr
total 4
-rw-r--r-- 1 root root 232 Aug 28 21:12 arbiter.conf
[root@tidb3 conf]#
[root@tidb3 conf]# cat arbiter.conf
dbpath=/mongodb_data/arbiter
logpath=/mongodb_data/log/arbiter.log
pidfilepath=/mongodb_data/arbiter.pid
directoryperdb=true
logappend=true
replSet=replme
bind_ip=192.168.126.143
port=27017
oplogSize=10000
fork=true
noprealloc=true
[root@tidb3 conf]#
[root@tidb3 conf]# ps -ef | grep mongo
root      16835  16416  0 21:36 pts/0    00:00:00 grep --color=auto mongo
[root@tidb3 conf]#
[root@tidb3 conf]# netstat -tupln | grep mongo
[root@tidb3 conf]#
[root@tidb3 conf]# mongod -f arbiter.conf
2020-08-28T21:36:47.812+0800 I CONTROL  [main] note: noprealloc may hurt performance in many applications
about to fork child process, waiting until server is ready for connections.
forked process: 16841
child process started successfully, parent exiting
[root@tidb3 conf]#
[root@tidb3 conf]# ps -ef | grep mongo
root      16841      1 34 21:36 ?        00:00:01 mongod -f arbiter.conf
root      16869  16416  0 21:36 pts/0    00:00:00 grep --color=auto mongo
[root@tidb3 conf]#
[root@tidb3 conf]# netstat -tupln | grep mongo
tcp    0    0 192.168.126.143:27017    0.0.0.0:*    LISTEN    16841/mongod
[root@tidb3 conf]#
Node 3's mongod service is up as well.
Install the MongoDB Shell Tool (all nodes)
Before moving on, install the shell package so that the mongo command is available:
[root@tidb1 software]# rpm -ivh mongodb-org-shell-3.6.20-0.1.rc0.el7.x86_64.rpm
warning: mongodb-org-shell-3.6.20-0.1.rc0.el7.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 91fa4ad5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:mongodb-org-shell-3.6.20-0.1.rc0.################################# [100%]
[root@tidb1 software]#
[root@tidb1 software]# whereis mongo
mongo: /usr/bin/mongo /usr/share/man/man1/mongo.1
[root@tidb1 software]#
[root@tidb1 software]# rpm -qf /usr/bin/mongo
mongodb-org-shell-3.6.20-0.1.rc0.el7.x86_64
[root@tidb1 software]#
Configure the Cluster
Connect to one of the MongoDB instances with mongo:
[root@tidb1 ~]# mongo 192.168.126.141:27017
MongoDB shell version v3.6.20-rc0
connecting to: mongodb://192.168.126.141:27017/test?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("bb666395-1ee5-4db3-9520-f00a27413a05") }
MongoDB server version: 3.6.20-rc0
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
        http://docs.mongodb.org/
Questions? Try the support group
        http://groups.google.com/group/mongodb-user
Server has startup warnings:
2020-08-28T06:30:08.264-0700 I CONTROL  [initandlisten]
2020-08-28T06:30:08.264-0700 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2020-08-28T06:30:08.264-0700 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2020-08-28T06:30:08.264-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-08-28T06:30:08.264-0700 I CONTROL  [initandlisten]
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten]
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten]
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten]
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 31126 processes, 1048576 files. Number of processes should be at least 524288 : 0.5 times number of files.
2020-08-28T06:30:08.265-0700 I CONTROL  [initandlisten]
>
The configuration below is executed in the mongo shell.
Switch to admin, then define the replica set members and the arbiter node:
> use admin
switched to db admin
>
> cfg={_id:"replme",members:[{_id:0,host:"192.168.126.141:27017",priority:2},{_id:1,host:"192.168.126.142:27017",priority:1},{_id:2,host:"192.168.126.143:27017",arbiterOnly:true}]};
{
    "_id" : "replme",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.126.141:27017",
            "priority" : 2
        },
        {
            "_id" : 1,
            "host" : "192.168.126.142:27017",
            "priority" : 1
        },
        {
            "_id" : 2,
            "host" : "192.168.126.143:27017",
            "arbiterOnly" : true
        }
    ]
}
>
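The same member document can also be kept in a small JS file and fed to mongo non-interactively, which makes the setup repeatable. A sketch; the file name rs-init.js is my own choice, and on a real node you would run it as `mongo 192.168.126.141:27017 rs-init.js` (here the sketch only writes and inspects the file):

```shell
# Sketch: write the replica-set config as a mongo shell script.
d=$(mktemp -d)
cat > "$d/rs-init.js" <<'EOF'
cfg = {
  _id: "replme",
  members: [
    { _id: 0, host: "192.168.126.141:27017", priority: 2 },
    { _id: 1, host: "192.168.126.142:27017", priority: 1 },
    { _id: 2, host: "192.168.126.143:27017", arbiterOnly: true }
  ]
};
rs.initiate(cfg);
EOF
# One host: entry per replica-set member
grep -c 'host:' "$d/rs-init.js"
```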
Check the current status:
> rs.status()
{
    "info" : "run rs.initiate(...) if not yet done for the set",
    "ok" : 0,
    "errmsg" : "no replset config has been received",
    "code" : 94,
    "codeName" : "NotYetInitialized"
}
>
As the message says, rs.initiate() still needs to be run:
> rs.initiate(cfg)
{
    "ok" : 1,
    "operationTime" : Timestamp(1598625974, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1598625974, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
replme:SECONDARY>

Then wait a moment and press Enter again; the prompt changes to what our configuration expects:

replme:PRIMARY>
Check the cluster status again:
replme:PRIMARY> rs.status()
{
    "set" : "replme",
    "date" : ISODate("2020-08-28T14:50:41.259Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1598626236, 1),
            "t" : NumberLong(1)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1598626236, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1598626236, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1598626236, 1),
            "t" : NumberLong(1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.126.141:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 365,
            "optime" : {
                "ts" : Timestamp(1598626236, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2020-08-28T14:50:36Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1598625985, 1),
            "electionDate" : ISODate("2020-08-28T14:46:25Z"),
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.126.142:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 266,
            "optime" : {
                "ts" : Timestamp(1598626236, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1598626236, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2020-08-28T14:50:36Z"),
            "optimeDurableDate" : ISODate("2020-08-28T14:50:36Z"),
            "lastHeartbeat" : ISODate("2020-08-28T14:50:39.776Z"),
            "lastHeartbeatRecv" : ISODate("2020-08-28T14:50:40.406Z"),
            "pingMs" : NumberLong(1),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.126.141:27017",
            "syncSourceHost" : "192.168.126.141:27017",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.126.143:27017",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 266,
            "lastHeartbeat" : ISODate("2020-08-28T14:50:39.776Z"),
            "lastHeartbeatRecv" : ISODate("2020-08-28T14:50:39.304Z"),
            "pingMs" : NumberLong(1),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1598626236, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1598626236, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
replme:PRIMARY>
replme:PRIMARY> rs.isMaster()
{
    "hosts" : [
        "192.168.126.141:27017",
        "192.168.126.142:27017"
    ],
    "arbiters" : [
        "192.168.126.143:27017"
    ],
    "setName" : "replme",
    "setVersion" : 1,
    "ismaster" : true,
    "secondary" : false,
    "primary" : "192.168.126.141:27017",
    "me" : "192.168.126.141:27017",
    "electionId" : ObjectId("7fffffff0000000000000001"),
    "lastWrite" : {
        "opTime" : {
            "ts" : Timestamp(1598626496, 1),
            "t" : NumberLong(1)
        },
        "lastWriteDate" : ISODate("2020-08-28T14:54:56Z"),
        "majorityOpTime" : {
            "ts" : Timestamp(1598626496, 1),
            "t" : NumberLong(1)
        },
        "majorityWriteDate" : ISODate("2020-08-28T14:54:56Z")
    },
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "maxWriteBatchSize" : 100000,
    "localTime" : ISODate("2020-08-28T14:55:01.604Z"),
    "logicalSessionTimeoutMinutes" : 30,
    "minWireVersion" : 0,
    "maxWireVersion" : 6,
    "readOnly" : false,
    "ok" : 1,
    "operationTime" : Timestamp(1598626496, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1598626496, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
replme:PRIMARY>
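A quick health check is to pull just the stateStr fields out of the rs.status() output. The sketch below does that extraction with grep/sed; here it is fed a here-doc sample shaped like the output above (an assumption for the sake of a self-contained example), while on a live node you would pipe in something like `mongo --quiet 192.168.126.141:27017 --eval 'printjson(rs.status())'`:

```shell
# Sketch: list member states from rs.status()-style output.
cat <<'EOF' | grep '"stateStr"' | sed 's/.*: "\(.*\)".*/\1/'
    "stateStr" : "PRIMARY",
    "stateStr" : "SECONDARY",
    "stateStr" : "ARBITER",
EOF
```

With one PRIMARY, one SECONDARY, and one ARBITER printed, the set matches the intended topology.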
The rs.initiate(cfg) step above only needs to be run once, on a single node.
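The cfg document passed to rs.initiate() is not repeated in this section, but based on the member list visible in the rs.isMaster() output above it would have looked roughly like the sketch below. This is an illustrative reconstruction in Python dict form, not the author's exact config; the field names follow the replSetInitiate document format.

```python
# Illustrative reconstruction of the replica-set config document ("cfg")
# given to rs.initiate(). Hosts match the members shown in rs.isMaster().
cfg = {
    "_id": "replme",  # the replica-set name; must match mongod's replSet setting
    "members": [
        {"_id": 0, "host": "192.168.126.141:27017"},                       # node 1
        {"_id": 1, "host": "192.168.126.142:27017"},                       # node 2
        {"_id": 2, "host": "192.168.126.143:27017", "arbiterOnly": True},  # node 3: arbiter
    ],
}

# The arbiter votes in elections but stores no data.
arbiters = [m["host"] for m in cfg["members"] if m.get("arbiterOnly")]
print(arbiters)  # ['192.168.126.143:27017']
```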
After the steps above are done, check the other two nodes.
Node 2:
[root@tidb2 ~]# mongo 192.168.126.142:27017
MongoDB shell version v3.6.20-rc0
connecting to: mongodb://192.168.126.142:27017/test?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("040d0e2b-0b5a-4cee-bb1b-d0baf1ac2fd0") }
MongoDB server version: 3.6.20-rc0
Server has startup warnings:
2020-08-28T22:44:33.894+0800 I CONTROL  [initandlisten]
2020-08-28T22:44:33.895+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2020-08-28T22:44:33.895+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2020-08-28T22:44:33.895+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-08-28T22:44:33.895+0800 I CONTROL  [initandlisten]
2020-08-28T22:44:33.895+0800 I CONTROL  [initandlisten]
2020-08-28T22:44:33.895+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-08-28T22:44:33.895+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-08-28T22:44:33.895+0800 I CONTROL  [initandlisten]
2020-08-28T22:44:33.895+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2020-08-28T22:44:33.895+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-08-28T22:44:33.895+0800 I CONTROL  [initandlisten]
2020-08-28T22:44:33.895+0800 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 31761 processes, 1048576 files. Number of processes should be at least 524288 : 0.5 times number of files.
2020-08-28T22:44:33.895+0800 I CONTROL  [initandlisten]
replme:SECONDARY>
replme:SECONDARY> rs.isMaster()
{
    "hosts" : [ "192.168.126.141:27017", "192.168.126.142:27017" ],
    "arbiters" : [ "192.168.126.143:27017" ],
    "setName" : "replme",
    "setVersion" : 1,
    "ismaster" : false,
    "secondary" : true,
    "primary" : "192.168.126.141:27017",
    "me" : "192.168.126.142:27017",
    "lastWrite" : { "opTime" : { "ts" : Timestamp(1598626416, 1), "t" : NumberLong(1) }, "lastWriteDate" : ISODate("2020-08-28T14:53:36Z"), "majorityOpTime" : { "ts" : Timestamp(1598626416, 1), "t" : NumberLong(1) }, "majorityWriteDate" : ISODate("2020-08-28T14:53:36Z") },
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "maxWriteBatchSize" : 100000,
    "localTime" : ISODate("2020-08-28T14:53:39.038Z"),
    "logicalSessionTimeoutMinutes" : 30,
    "minWireVersion" : 0,
    "maxWireVersion" : 6,
    "readOnly" : false,
    "ok" : 1,
    "operationTime" : Timestamp(1598626416, 1),
    "$clusterTime" : { "clusterTime" : Timestamp(1598626416, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }
}
replme:SECONDARY>
Node 3:
[root@tidb3 ~]# mongo 192.168.126.143:27017
MongoDB shell version v3.6.20-rc0
connecting to: mongodb://192.168.126.143:27017/test?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("7f01cdaa-5aa2-4826-965d-c74fcf04ad8f") }
MongoDB server version: 3.6.20-rc0
Server has startup warnings:
2020-08-28T22:44:33.865+0800 I CONTROL  [initandlisten]
2020-08-28T22:44:33.866+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2020-08-28T22:44:33.866+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2020-08-28T22:44:33.866+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-08-28T22:44:33.866+0800 I CONTROL  [initandlisten]
2020-08-28T22:44:33.866+0800 I CONTROL  [initandlisten]
2020-08-28T22:44:33.866+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-08-28T22:44:33.866+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-08-28T22:44:33.866+0800 I CONTROL  [initandlisten]
2020-08-28T22:44:33.867+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2020-08-28T22:44:33.867+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-08-28T22:44:33.867+0800 I CONTROL  [initandlisten]
2020-08-28T22:44:33.867+0800 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 31761 processes, 1048576 files. Number of processes should be at least 524288 : 0.5 times number of files.
2020-08-28T22:44:33.867+0800 I CONTROL  [initandlisten]
replme:ARBITER>
replme:ARBITER> rs.isMaster()
{
    "hosts" : [ "192.168.126.141:27017", "192.168.126.142:27017" ],
    "arbiters" : [ "192.168.126.143:27017" ],
    "setName" : "replme",
    "setVersion" : 1,
    "ismaster" : false,
    "secondary" : false,
    "primary" : "192.168.126.141:27017",
    "arbiterOnly" : true,
    "me" : "192.168.126.143:27017",
    "lastWrite" : { "opTime" : { "ts" : Timestamp(1598626556, 1), "t" : NumberLong(1) }, "lastWriteDate" : ISODate("2020-08-28T14:55:56Z"), "majorityOpTime" : { "ts" : Timestamp(1598626556, 1), "t" : NumberLong(1) }, "majorityWriteDate" : ISODate("2020-08-28T14:55:56Z") },
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "maxWriteBatchSize" : 100000,
    "localTime" : ISODate("2020-08-28T14:56:01.999Z"),
    "minWireVersion" : 0,
    "maxWireVersion" : 6,
    "readOnly" : false,
    "ok" : 1
}
replme:ARBITER>
The mongo shell prompt on each of the three nodes clearly shows that node's role in the replica set.
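Besides the prompt, the numeric "state" field in rs.status() encodes the same roles. A small sketch of the mapping, restricted to the codes that actually appear in the outputs in this article:

```python
# Subset of replica-set member states, as seen in the rs.status() outputs above.
MEMBER_STATES = {
    1: "PRIMARY",
    2: "SECONDARY",
    7: "ARBITER",
    8: "(not reachable/healthy)",  # appears later, after the primary is killed
}

def state_str(state: int) -> str:
    """Translate a numeric member state to the string rs.status() shows."""
    return MEMBER_STATES.get(state, "UNKNOWN")

print(state_str(1), state_str(2), state_str(7))
```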
Test: data replication
Node 1: insert a document:
replme:PRIMARY> use test
switched to db test
replme:PRIMARY>
replme:PRIMARY> db.user.save({name:"Lucifer", age:832})
WriteResult({ "nInserted" : 1 })
replme:PRIMARY>
replme:PRIMARY> db.user.find()
{ "_id" : ObjectId("5f491b84024e3ec2585da6bb"), "name" : "Lucifer", "age" : 832 }
replme:PRIMARY>
Node 2: query it:
replme:SECONDARY> rs.printReplicationInfo()
configured oplog size:   10000MB
log length start to end: 742secs (0.21hrs)
oplog first event time:  Fri Aug 28 2020 22:46:14 GMT+0800 (CST)
oplog last event time:   Fri Aug 28 2020 22:58:36 GMT+0800 (CST)
now:                     Fri Aug 28 2020 22:58:39 GMT+0800 (CST)
replme:SECONDARY>
replme:SECONDARY> use test
switched to db test
replme:SECONDARY>
replme:SECONDARY> db.user.find()
{ "_id" : ObjectId("5f491b84024e3ec2585da6bb"), "name" : "Lucifer", "age" : 832 }
replme:SECONDARY>
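The "log length start to end" value in rs.printReplicationInfo() is simply the span between the first and last oplog events. Reproducing the arithmetic from the output above:

```python
from datetime import datetime

# Oplog window = last event time - first event time (values from the output above).
first = datetime(2020, 8, 28, 22, 46, 14)
last = datetime(2020, 8, 28, 22, 58, 36)

window_secs = (last - first).total_seconds()
print(f"{window_secs:.0f}secs ({window_secs / 3600:.2f}hrs)")  # 742secs (0.21hrs)
```

This window is how long a secondary can stay offline and still catch up via the oplog, so a healthy margin here matters for maintenance.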
Failure simulation
Node 1:
Kill the mongod process to simulate a failure:
[root@tidb1 ~]# ps -ef | grep mongo
root      60807      1  1 07:44 ?        00:00:11 mongod -f /mongodb_data/conf/master.conf
root      61800  61739  0 08:01 pts/1    00:00:00 tail -f /mongodb_data/log/master.log
root      61848  41317  0 08:02 pts/0    00:00:00 grep --color=auto mongo
[root@tidb1 ~]#
[root@tidb1 ~]# kill -9 60807
[root@tidb1 ~]#
[root@tidb1 ~]# ps -ef | grep mongo
root      61800  61739  0 08:01 pts/1    00:00:00 tail -f /mongodb_data/log/master.log
root      61869  41317  0 08:03 pts/0    00:00:00 grep --color=auto mongo
[root@tidb1 ~]#
As soon as the process on node 1 is killed, node 2's log detects the failure:
One group of log entries (the connection failures):
2020-08-28T23:03:12.199+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Connecting to 192.168.126.141:27017
2020-08-28T23:03:12.203+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Failed to connect to 192.168.126.141:27017 - HostUnreachable: Connection refused
2020-08-28T23:03:12.203+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Dropping all pooled connections to 192.168.126.141:27017 due to failed operation on a connection
2020-08-28T23:03:12.204+0800 I REPL_HB  [replexec-2] Error in heartbeat (requestId: 3587) to 192.168.126.141:27017, response status: HostUnreachable: Connection refused

Another group (the election and promotion):
2020-08-28T23:03:12.207+0800 I REPL     [replexec-9] Caught up to the latest optime known via heartbeats after becoming primary. Target optime: { ts: Timestamp(1598626976, 1), t: 1 }. My Last Applied: { ts: Timestamp(1598626976, 1), t: 1 }
2020-08-28T23:03:12.208+0800 I REPL     [replexec-9] Exited primary catch-up mode.
2020-08-28T23:03:13.543+0800 I REPL     [rsSync] transition to primary complete; database writes are now permitted
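What the log shows is heartbeat-based failure detection: members ping each other every heartbeatIntervalMillis (2000 ms, per the rs.status() output), and once the primary has been silent longer than the election timeout, a secondary stands for election. The sketch below is illustrative only, not MongoDB's actual implementation; the 10-second timeout is MongoDB's documented default for settings.electionTimeoutMillis.

```python
# Minimal sketch of heartbeat-timeout failure detection.
HEARTBEAT_INTERVAL_MS = 2000   # heartbeatIntervalMillis from rs.status()
ELECTION_TIMEOUT_MS = 10000    # MongoDB default electionTimeoutMillis (assumption)

def should_call_election(last_heartbeat_ok_ms: int, now_ms: int) -> bool:
    """True once the primary has been silent longer than the election timeout."""
    return now_ms - last_heartbeat_ok_ms > ELECTION_TIMEOUT_MS

# Heartbeats from the primary stop at t=0; the secondary keeps probing every 2s
# and stands for election once the timeout has elapsed.
probes = [t * HEARTBEAT_INTERVAL_MS for t in range(1, 8)]
print([should_call_election(0, t) for t in probes])
# [False, False, False, False, False, True, True]
```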
Node 3 logs similar errors at the same time:
2020-08-28T23:03:02.137+0800 I NETWORK  [conn5] end connection 192.168.126.141:59924 (2 connections now open)
2020-08-28T23:03:03.525+0800 I ASIO     [replexec-3] dropping unhealthy pooled connection to 192.168.126.141:27017
2020-08-28T23:03:03.525+0800 I ASIO     [replexec-3] after drop, pool was empty, going to spawn some connections
2020-08-28T23:03:03.526+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Connecting to 192.168.126.141:27017
2020-08-28T23:03:03.526+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Failed to connect to 192.168.126.141:27017 - HostUnreachable: Connection refused
2020-08-28T23:03:03.527+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Dropping all pooled connections to 192.168.126.141:27017 due to failed operation on a connection
2020-08-28T23:03:03.527+0800 I REPL_HB  [replexec-2] Error in heartbeat (requestId: 2023) to 192.168.126.141:27017, response status: HostUnreachable: Connection refused
Now look at node 2's status.
Node 2 was previously SECONDARY and has become PRIMARY:
replme:SECONDARY>
replme:PRIMARY>
replme:PRIMARY> rs.status()
{
    "set" : "replme",
    "date" : ISODate("2020-08-28T15:12:18.190Z"),
    "myState" : 1,
    "term" : NumberLong(2),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1598626976, 1), "t" : NumberLong(1) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1598626976, 1), "t" : NumberLong(1) },
        "appliedOpTime" : { "ts" : Timestamp(1598627533, 1), "t" : NumberLong(2) },
        "durableOpTime" : { "ts" : Timestamp(1598627533, 1), "t" : NumberLong(2) }
    },
    "members" : [
        { "_id" : 0, "name" : "192.168.126.141:27017", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) }, "optimeDurable" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2020-08-28T15:12:18.183Z"), "lastHeartbeatRecv" : ISODate("2020-08-28T15:03:01.977Z"), "pingMs" : NumberLong(1), "lastHeartbeatMessage" : "Connection refused", "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "configVersion" : -1 },
        { "_id" : 1, "name" : "192.168.126.142:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 1666, "optime" : { "ts" : Timestamp(1598627533, 1), "t" : NumberLong(2) }, "optimeDate" : ISODate("2020-08-28T15:12:13Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "electionTime" : Timestamp(1598626992, 1), "electionDate" : ISODate("2020-08-28T15:03:12Z"), "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" },
        { "_id" : 2, "name" : "192.168.126.143:27017", "health" : 1, "state" : 7, "stateStr" : "ARBITER", "uptime" : 1564, "lastHeartbeat" : ISODate("2020-08-28T15:12:17.448Z"), "lastHeartbeatRecv" : ISODate("2020-08-28T15:12:16.682Z"), "pingMs" : NumberLong(1), "lastHeartbeatMessage" : "", "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "configVersion" : 1 }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1598627533, 1),
    "$clusterTime" : { "clusterTime" : Timestamp(1598627533, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }
}
replme:PRIMARY>
As shown, node 2 has indeed been promoted to PRIMARY.
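The reason this failover works is that elections require a majority of voting members. With three voters (primary, secondary, arbiter), the majority is two, so after the primary dies the surviving secondary plus the arbiter can still elect a new primary. A one-line illustration of the majority rule:

```python
# Elections need a strict majority of the replica set's voting members.
def majority(voting_members: int) -> int:
    return voting_members // 2 + 1

print(majority(3))  # 2: secondary + arbiter are enough after the primary dies
print(majority(2))  # 2: without the arbiter, one surviving node could NOT elect itself
```

This is exactly why the arbiter is worth running even though it stores no data: it turns a two-data-node set into a three-voter set that survives a single failure.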
Next, start MongoDB on node 1 again:
[root@tidb1 ~]# mongod -f /mongodb_data/conf/master.conf
2020-08-28T08:14:31.271-0700 I CONTROL  [main] note: noprealloc may hurt performance in many applications
about to fork child process, waiting until server is ready for connections.
forked process: 62430
child process started successfully, parent exiting
[root@tidb1 ~]#
Then check node 1 and node 2 again.
Node 1:
[root@tidb1 ~]# mongo 192.168.126.141:27017
MongoDB shell version v3.6.20-rc0
connecting to: mongodb://192.168.126.141:27017/test?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("5b6013a5-6a0b-498f-aed1-9b9106ff2979") }
MongoDB server version: 3.6.20-rc0
Server has startup warnings:
2020-08-28T08:14:32.709-0700 I CONTROL  [initandlisten]
2020-08-28T08:14:32.709-0700 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2020-08-28T08:14:32.709-0700 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2020-08-28T08:14:32.709-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-08-28T08:14:32.709-0700 I CONTROL  [initandlisten]
2020-08-28T08:14:32.710-0700 I CONTROL  [initandlisten]
2020-08-28T08:14:32.710-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-08-28T08:14:32.710-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-08-28T08:14:32.710-0700 I CONTROL  [initandlisten]
2020-08-28T08:14:32.710-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2020-08-28T08:14:32.710-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-08-28T08:14:32.711-0700 I CONTROL  [initandlisten]
2020-08-28T08:14:32.711-0700 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 31126 processes, 1048576 files. Number of processes should be at least 524288 : 0.5 times number of files.
2020-08-28T08:14:32.711-0700 I CONTROL  [initandlisten]
2020-08-28T08:14:32.749-0700 I REPL     [replexec-0]
2020-08-28T08:14:32.749-0700 I REPL     [replexec-0] ** WARNING: This replica set has a Primary-Secondary-Arbiter architecture, but readConcern:majority is enabled
2020-08-28T08:14:32.749-0700 I REPL     [replexec-0] **          for this node. This is not a recommended configuration. Please see
2020-08-28T08:14:32.749-0700 I REPL     [replexec-0] **          https://dochub.mongodb.org/core/psa-disable-rc-majority-3.6
2020-08-28T08:14:32.749-0700 I REPL     [replexec-0]
replme:PRIMARY>
replme:PRIMARY> rs.status()
{
    "set" : "replme",
    "date" : ISODate("2020-08-28T15:15:55.811Z"),
    "myState" : 1,
    "term" : NumberLong(3),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1598627755, 1), "t" : NumberLong(3) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1598627755, 1), "t" : NumberLong(3) },
        "appliedOpTime" : { "ts" : Timestamp(1598627755, 1), "t" : NumberLong(3) },
        "durableOpTime" : { "ts" : Timestamp(1598627755, 1), "t" : NumberLong(3) }
    },
    "members" : [
        { "_id" : 0, "name" : "192.168.126.141:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 84, "optime" : { "ts" : Timestamp(1598627755, 1), "t" : NumberLong(3) }, "optimeDate" : ISODate("2020-08-28T15:15:55Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "electionTime" : Timestamp(1598627684, 1), "electionDate" : ISODate("2020-08-28T15:14:44Z"), "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" },
        { "_id" : 1, "name" : "192.168.126.142:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 83, "optime" : { "ts" : Timestamp(1598627745, 1), "t" : NumberLong(3) }, "optimeDurable" : { "ts" : Timestamp(1598627745, 1), "t" : NumberLong(3) }, "optimeDate" : ISODate("2020-08-28T15:15:45Z"), "optimeDurableDate" : ISODate("2020-08-28T15:15:45Z"), "lastHeartbeat" : ISODate("2020-08-28T15:15:54.323Z"), "lastHeartbeatRecv" : ISODate("2020-08-28T15:15:54.393Z"), "pingMs" : NumberLong(1), "lastHeartbeatMessage" : "", "syncingTo" : "192.168.126.141:27017", "syncSourceHost" : "192.168.126.141:27017", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 1 },
        { "_id" : 2, "name" : "192.168.126.143:27017", "health" : 1, "state" : 7, "stateStr" : "ARBITER", "uptime" : 83, "lastHeartbeat" : ISODate("2020-08-28T15:15:54.322Z"), "lastHeartbeatRecv" : ISODate("2020-08-28T15:15:55.783Z"), "pingMs" : NumberLong(1), "lastHeartbeatMessage" : "", "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "configVersion" : 1 }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1598627755, 1),
    "$clusterTime" : { "clusterTime" : Timestamp(1598627755, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }
}
replme:PRIMARY>
Node 2:
replme:PRIMARY>
2020-08-28T23:15:29.866+0800 I NETWORK  [thread1] trying reconnect to 192.168.126.142:27017 (192.168.126.142) failed
2020-08-28T23:15:29.867+0800 I NETWORK  [thread1] reconnect 192.168.126.142:27017 (192.168.126.142) ok
replme:SECONDARY>
replme:SECONDARY> rs.status()
{
    "set" : "replme",
    "date" : ISODate("2020-08-28T15:16:32.317Z"),
    "myState" : 2,
    "term" : NumberLong(3),
    "syncingTo" : "192.168.126.141:27017",
    "syncSourceHost" : "192.168.126.141:27017",
    "syncSourceId" : 0,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1598627795, 1), "t" : NumberLong(3) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1598627795, 1), "t" : NumberLong(3) },
        "appliedOpTime" : { "ts" : Timestamp(1598627795, 1), "t" : NumberLong(3) },
        "durableOpTime" : { "ts" : Timestamp(1598627795, 1), "t" : NumberLong(3) }
    },
    "members" : [
        { "_id" : 0, "name" : "192.168.126.141:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 121, "optime" : { "ts" : Timestamp(1598627785, 1), "t" : NumberLong(3) }, "optimeDurable" : { "ts" : Timestamp(1598627785, 1), "t" : NumberLong(3) }, "optimeDate" : ISODate("2020-08-28T15:16:25Z"), "optimeDurableDate" : ISODate("2020-08-28T15:16:25Z"), "lastHeartbeat" : ISODate("2020-08-28T15:16:31.022Z"), "lastHeartbeatRecv" : ISODate("2020-08-28T15:16:30.980Z"), "pingMs" : NumberLong(1), "lastHeartbeatMessage" : "", "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "electionTime" : Timestamp(1598627684, 1), "electionDate" : ISODate("2020-08-28T15:14:44Z"), "configVersion" : 1 },
        { "_id" : 1, "name" : "192.168.126.142:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 1920, "optime" : { "ts" : Timestamp(1598627795, 1), "t" : NumberLong(3) }, "optimeDate" : ISODate("2020-08-28T15:16:35Z"), "syncingTo" : "192.168.126.141:27017", "syncSourceHost" : "192.168.126.141:27017", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" },
        { "_id" : 2, "name" : "192.168.126.143:27017", "health" : 1, "state" : 7, "stateStr" : "ARBITER", "uptime" : 1819, "lastHeartbeat" : ISODate("2020-08-28T15:16:31.022Z"), "lastHeartbeatRecv" : ISODate("2020-08-28T15:16:31.284Z"), "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "configVersion" : 1 }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1598627795, 1),
    "$clusterTime" : { "clusterTime" : Timestamp(1598627795, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }
}
replme:SECONDARY>
With that, the simplest MongoDB cluster, one primary plus one secondary plus one arbiter, is complete.
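As noted at the start of the article, clients connect to the primary and secondary but never to the arbiter. With a replica-set connection string the driver discovers the current primary on its own, so a failover like the one demonstrated above is transparent to applications. The sketch below just assembles such a URI (the standard mongodb:// connection-string format) from this cluster's data-bearing members; it does not open a real connection.

```python
# Build a replica-set connection URI listing only the data-bearing members;
# the arbiter (192.168.126.143) is deliberately excluded.
data_nodes = ["192.168.126.141:27017", "192.168.126.142:27017"]
uri = "mongodb://" + ",".join(data_nodes) + "/?replicaSet=replme"
print(uri)
# mongodb://192.168.126.141:27017,192.168.126.142:27017/?replicaSet=replme
```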