As the title suggests, this post walks through deploying PingCAP's TiDB in detail.
It references the official documentation:
1. https://www.pingcap.com/docs-cn/
2. https://www.pingcap.com/docs-cn/op-guide/ansible-deployment/#准备机器
3. https://www.pingcap.com/docs-cn/op-guide/offline-ansible-deployment/

Let's begin.
——————————————————————

The official documentation already introduces TiDB clearly, so I won't repeat that here.
While reading the docs, I drew the following mind map to aid understanding:

——————————————————————
In my environment, the hosts are divided up as follows:

Deployment type: cluster with multiple TiKV instances per host

Total: 7 servers

tidb-center: TiDB control machine

tidb-1: PD1 / TiDB1
tidb-2: PD2 / TiDB2
tidb-3: PD3
tidb-4: TiKV-1-1 / TiKV-2-1
tidb-5: TiKV-1-2 / TiKV-2-2
tidb-6: TiKV-1-3 / TiKV-2-3

Requirements for all of the hosts above:
Firewall & SELinux: stopped / disabled at boot
IP: statically assigned
Python: 2.7
OS: CentOS / RHEL / OEL 7.3+
Internet access: enabled
Architecture: x86_64
File: /etc/hosts – records configured for every node
YUM: a domestic mirror configured: Tsinghua University (mirrors.tuna.tsinghua.edu.cn)
YUM cache: enabled

Memory:
PD:
at least 8 GB

TiDB / TiKV:
at least 16 GB
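
The per-host requirements above lend themselves to a quick preflight script. Below is a minimal sketch with a hypothetical `check` helper (not from the official docs); on a real node you would feed it the output of `getenforce`, `uname -m`, and so on, as shown in the comments.

```shell
# Hypothetical preflight helper: compare an observed value against the
# required one and report PASS/FAIL.
check() {
  name=$1; want=$2; got=$3
  if [ "$got" = "$want" ]; then
    echo "PASS: $name = $got"
  else
    echo "FAIL: $name = $got (want $want)"
  fi
}

# On a real node, gather live values, e.g.:
#   check "SELinux" "Disabled" "$(getenforce)"
#   check "arch"    "x86_64"   "$(uname -m)"
# Demonstration with fixed values:
check "SELinux" "Disabled" "Disabled"
check "arch"    "x86_64"   "x86_64"
```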

The detailed steps for these settings are described in my earlier deployment posts on this site (Oracle, Cloudera), so I won't repeat them here.

File: /etc/hosts

[root@tidb-center ~]# vi /etc/hosts
[root@tidb-center ~]# 
[root@tidb-center ~]# cat /etc/hosts
#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# Localhost
127.0.0.1       localhost

# Public

# Center
10.158.1.94     tidb-center

# TiDB Cluster
10.158.1.98     tidb-1
10.158.1.122    tidb-2
10.158.1.123    tidb-3
10.158.1.125    tidb-4
10.158.1.126    tidb-5
10.158.1.127    tidb-6

[root@tidb-center ~]# 
[root@tidb-center ~]# scp /etc/hosts tidb-1:/etc/
The authenticity of host 'tidb-1 (10.158.1.98)' can't be established.
ECDSA key fingerprint is SHA256:ntF/ykpdsrDjBE0VPUrFmSXkg9ttINeQOyZjkZRdMn0.
ECDSA key fingerprint is MD5:45:24:31:e2:71:18:05:ad:99:74:99:86:80:ae:9a:b5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tidb-1,10.158.1.98' (ECDSA) to the list of known hosts.
root@tidb-1's password: 
hosts                                                                                                                              100%  373   107.3KB/s   00:00    
[root@tidb-center ~]# scp /etc/hosts tidb-2:/etc/
The authenticity of host 'tidb-2 (10.158.1.122)' can't be established.
ECDSA key fingerprint is SHA256:Sqsdwlj7qVMT0d9hfYQw8M6aSXhEoQRWRlKqXxl1GCc.
ECDSA key fingerprint is MD5:82:f0:96:54:57:38:1c:98:52:31:d4:82:90:81:2e:8d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tidb-2,10.158.1.122' (ECDSA) to the list of known hosts.
root@tidb-2's password: 
hosts                                                                                                                              100%  373   100.7KB/s   00:00    
[root@tidb-center ~]# scp /etc/hosts tidb-3:/etc/
The authenticity of host 'tidb-3 (10.158.1.123)' can't be established.
ECDSA key fingerprint is SHA256:fVX1Zyh9OGfzvCdgwIMGeLB7xtJvn7JvUx/n+cv0dxg.
ECDSA key fingerprint is MD5:bd:e8:71:f9:c0:6a:d5:be:39:25:d6:16:a5:0d:be:98.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tidb-3,10.158.1.123' (ECDSA) to the list of known hosts.
root@tidb-3's password: 
hosts                                                                                                                              100%  373   181.7KB/s   00:00    
[root@tidb-center ~]# scp /etc/hosts tidb-4:/etc/
The authenticity of host 'tidb-4 (10.158.1.125)' can't be established.
ECDSA key fingerprint is SHA256:uqhdFP4Ww9IVWtPyJzfoALB0NqF3fepMnaDF0IwDjUM.
ECDSA key fingerprint is MD5:15:c1:00:03:c5:52:2d:72:e4:21:7f:00:ea:b0:51:43.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tidb-4,10.158.1.125' (ECDSA) to the list of known hosts.
root@tidb-4's password: 
hosts                                                                                                                              100%  373   212.4KB/s   00:00    
[root@tidb-center ~]# scp /etc/hosts tidb-5:/etc/
The authenticity of host 'tidb-5 (10.158.1.126)' can't be established.
ECDSA key fingerprint is SHA256:xy66gRULSGYYXqjVvtxAmX2jNjBw8gJhQDqlTxI5U2A.
ECDSA key fingerprint is MD5:41:af:6b:55:5c:e4:02:a9:b5:ac:d9:ba:c2:46:b4:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tidb-5,10.158.1.126' (ECDSA) to the list of known hosts.
root@tidb-5's password: 
hosts                                                                                                                              100%  373    57.5KB/s   00:00    
[root@tidb-center ~]# scp /etc/hosts tidb-6:/etc/
The authenticity of host 'tidb-6 (10.158.1.127)' can't be established.
ECDSA key fingerprint is SHA256:ROzk5sIl6qaxPPF10anTd43XVyfkxOFHdcS2uyD5DPk.
ECDSA key fingerprint is MD5:30:6b:a6:82:25:2c:29:f7:96:23:95:a6:12:de:41:d2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tidb-6,10.158.1.127' (ECDSA) to the list of known hosts.
root@tidb-6's password: 
hosts                                                                                                                              100%  373     1.1KB/s   00:00    
[root@tidb-center ~]# 
[root@tidb-center ~]# 
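
The six scp invocations above differ only in the host name, so they can be collapsed into a loop. A sketch is below; the `echo` prints each command instead of executing it, so drop the `echo` for real use (expect the same host-key and password prompts as in the transcript).

```shell
# Print the scp command for each node; remove `echo` to actually copy.
push_hosts() {
  for host in tidb-1 tidb-2 tidb-3 tidb-4 tidb-5 tidb-6; do
    echo "scp /etc/hosts ${host}:/etc/"
  done
}
push_hosts
```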

——————————————————————

1 ))))))))))))))) Prepare the environment

1.1 —- TiDB: control machine

Install the dependency packages:
The dependencies listed in the official docs include EPEL (epel-release).
In my environment, however, EPEL is already configured against the Tsinghua mirror in the YUM configuration, so it does not need to be installed.

[root@tidb-center ~]# yum repolist
Loaded plugins: langpacks, ulninfo
repo id                                         repo name                                                                                                      status
base/x86_64                                     CentOS-7 - Base                                                                                                10,019
epel/x86_64                                     Extra Packages for Enterprise Linux 7 - x86_64                                                                 12,901
extras/x86_64                                   CentOS-7 - Extras                                                                                                 371
ol7_UEKR5/x86_64                                Latest Unbreakable Enterprise Kernel Release 5 for Oracle Linux 7Server (x86_64)                                  125
ol7_latest/x86_64                               Oracle Linux 7Server Latest (x86_64)                                                                           12,232
updates/x86_64                                  CentOS-7 - Updates                                                                                              1,103
repolist: 36,751
[root@tidb-center ~]# 
[root@tidb-center ~]# yum install -y git curl sshpass python-pip
Loaded plugins: langpacks, ulninfo
base                                                                                                                                          | 3.6 kB  00:00:00     
epel                                                                                                                                          | 4.7 kB  00:00:00     
extras                                                                                                                                        | 3.4 kB  00:00:00     
ol7_UEKR5                                                                                                                                     | 1.2 kB  00:00:00     
ol7_latest                                                                                                                                    | 1.4 kB  00:00:00     
updates                                                                                                                                       | 3.4 kB  00:00:00     
(1/7): base/x86_64/group_gz                                                                                                                   | 166 kB  00:00:01     
(2/7): epel/x86_64/group_gz                                                                                                                   |  88 kB  00:00:01     
(3/7): base/x86_64/primary_db                                                                                                                 | 6.0 MB  00:00:01     
(4/7): extras/x86_64/primary_db                                                                                                               | 180 kB  00:00:00     
(5/7): epel/x86_64/updateinfo                                                                                                                 | 958 kB  00:00:02     
(6/7): updates/x86_64/primary_db                                                                                                              | 2.4 MB  00:00:01     
epel/x86_64/primary_db         FAILED                                          
https://mirrors.tuna.tsinghua.edu.cn/epel/7/x86_64/repodata/230aa4abd115a74462e967b33cc86a36b2f418a3c2d9fae0872e7bc7baf3f8cf-primary.sqlite.bz2: [Errno 12] Timeout on https://mirrors.tuna.tsinghua.edu.cn/epel/7/x86_64/repodata/230aa4abd115a74462e967b33cc86a36b2f418a3c2d9fae0872e7bc7baf3f8cf-primary.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
(7/7): epel/x86_64/primary_db                                                                                                                 | 6.6 MB  00:00:00     
(1/5): ol7_UEKR5/x86_64/updateinfo                                                                                                            |  29 kB  00:00:01     
(2/5): ol7_UEKR5/x86_64/primary                                                                                                               | 2.4 MB  00:00:02     
(3/5): ol7_latest/x86_64/updateinfo                                                                                                           | 837 kB  00:00:02     
(4/5): ol7_latest/x86_64/group                                                                                                                | 810 kB  00:00:03     
(5/5): ol7_latest/x86_64/primary                                                                                                              |  12 MB  00:00:01     
ol7_UEKR5                                                                                                                                                    125/125
ol7_latest                                                                                                                                               12232/12232
Package curl-7.29.0-51.el7.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package git.x86_64 0:1.8.3.1-20.el7 will be installed
--> Processing Dependency: perl-Git = 1.8.3.1-20.el7 for package: git-1.8.3.1-20.el7.x86_64
--> Processing Dependency: perl(Error) for package: git-1.8.3.1-20.el7.x86_64
--> Processing Dependency: perl(Git) for package: git-1.8.3.1-20.el7.x86_64
--> Processing Dependency: perl(Term::ReadKey) for package: git-1.8.3.1-20.el7.x86_64
---> Package python2-pip.noarch 0:8.1.2-7.el7 will be installed
---> Package sshpass.x86_64 0:1.06-2.el7 will be installed
--> Running transaction check
---> Package perl-Error.noarch 1:0.17020-2.el7 will be installed
---> Package perl-Git.noarch 0:1.8.3.1-20.el7 will be installed
---> Package perl-TermReadKey.x86_64 0:2.30-20.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=====================================================================================================================================================================
 Package                                    Arch                             Version                                      Repository                            Size
=====================================================================================================================================================================
Installing:
 git                                        x86_64                           1.8.3.1-20.el7                               ol7_latest                           4.4 M
 python2-pip                                noarch                           8.1.2-7.el7                                  epel                                 1.7 M
 sshpass                                    x86_64                           1.06-2.el7                                   extras                                21 k
Installing for dependencies:
 perl-Error                                 noarch                           1:0.17020-2.el7                              base                                  32 k
 perl-Git                                   noarch                           1.8.3.1-20.el7                               ol7_latest                            55 k
 perl-TermReadKey                           x86_64                           2.30-20.el7                                  base                                  31 k

Transaction Summary
=====================================================================================================================================================================
Install  3 Packages (+3 Dependent packages)

Total download size: 6.2 M
Installed size: 30 M
Downloading packages:
(1/6): perl-Error-0.17020-2.el7.noarch.rpm                                                                                                    |  32 kB  00:00:01     
(2/6): sshpass-1.06-2.el7.x86_64.rpm                                                                                                          |  21 kB  00:00:00     
(3/6): python2-pip-8.1.2-7.el7.noarch.rpm                                                                                                     | 1.7 MB  00:00:00     
warning: /var/cache/yum/x86_64/7Server/ol7_latest/packages/perl-Git-1.8.3.1-20.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEYB  --:--:-- ETA 
Public key for perl-Git-1.8.3.1-20.el7.noarch.rpm is not installed
(4/6): perl-Git-1.8.3.1-20.el7.noarch.rpm                                                                                                     |  55 kB  00:00:01     
(5/6): perl-TermReadKey-2.30-20.el7.x86_64.rpm                                                                                                |  31 kB  00:00:01     
(6/6): git-1.8.3.1-20.el7.x86_64.rpm                                                                                                          | 4.4 MB  00:00:02     
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                2.3 MB/s | 6.2 MB  00:00:02     
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Importing GPG key 0xEC551F03:
 Userid     : "Oracle OSS group (Open Source Software group) "
 Fingerprint: 4214 4123 fecf c55b 9086 313d 72f9 7b74 ec55 1f03
 Package    : 7:oraclelinux-release-7.6-1.0.15.el7.x86_64 (@anaconda/7.6)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : 1:perl-Error-0.17020-2.el7.noarch                                                                                                                 1/6 
  Installing : perl-TermReadKey-2.30-20.el7.x86_64                                                                                                               2/6 
  Installing : perl-Git-1.8.3.1-20.el7.noarch                                                                                                                    3/6 
  Installing : git-1.8.3.1-20.el7.x86_64                                                                                                                         4/6 
  Installing : sshpass-1.06-2.el7.x86_64                                                                                                                         5/6 
  Installing : python2-pip-8.1.2-7.el7.noarch                                                                                                                    6/6 
  Verifying  : python2-pip-8.1.2-7.el7.noarch                                                                                                                    1/6 
  Verifying  : perl-TermReadKey-2.30-20.el7.x86_64                                                                                                               2/6 
  Verifying  : 1:perl-Error-0.17020-2.el7.noarch                                                                                                                 3/6 
  Verifying  : git-1.8.3.1-20.el7.x86_64                                                                                                                         4/6 
  Verifying  : perl-Git-1.8.3.1-20.el7.noarch                                                                                                                    5/6 
  Verifying  : sshpass-1.06-2.el7.x86_64                                                                                                                         6/6 

Installed:
  git.x86_64 0:1.8.3.1-20.el7                          python2-pip.noarch 0:8.1.2-7.el7                          sshpass.x86_64 0:1.06-2.el7                         

Dependency Installed:
  perl-Error.noarch 1:0.17020-2.el7                    perl-Git.noarch 0:1.8.3.1-20.el7                    perl-TermReadKey.x86_64 0:2.30-20.el7                   

Complete!
[root@tidb-center ~]# 

With that, the dependencies are installed.

Control machine: create the system user tidb and generate an SSH key pair

Create the user:

[root@tidb-center ~]# id tidb
id: tidb: no such user
[root@tidb-center ~]# 
[root@tidb-center ~]# useradd -m -d /home/tidb tidb
[root@tidb-center ~]# 
[root@tidb-center ~]# id tidb
uid=1001(tidb) gid=1001(tidb) groups=1001(tidb)
[root@tidb-center ~]# 
[root@tidb-center ~]# ls -ltr /home/
total 0
drwx------. 3 adamhuan adamhuan 78 Mar  3 00:39 adamhuan
drwx------  3 tidb     tidb     78 Mar  3 11:46 tidb
[root@tidb-center ~]# 
[root@tidb-center ~]# ls -ltra /home/tidb/
total 12
-rw-r--r--  1 tidb tidb 231 Aug 24  2018 .bashrc
-rw-r--r--  1 tidb tidb 193 Aug 24  2018 .bash_profile
-rw-r--r--  1 tidb tidb  18 Aug 24  2018 .bash_logout
drwxr-xr-x  4 tidb tidb  39 Mar  3 00:39 .mozilla
drwxr-xr-x. 4 root root  34 Mar  3 11:46 ..
drwx------  3 tidb tidb  78 Mar  3 11:46 .
[root@tidb-center ~]# 
[root@tidb-center ~]# passwd tidb
Changing password for user tidb.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@tidb-center ~]# 

Configure sudoers

[root@tidb-center ~]# visudo
[root@tidb-center ~]# 
[root@tidb-center ~]# cat /etc/sudoers | grep tidb
tidb    ALL=(ALL)       NOPASSWD:ALL
[root@tidb-center ~]# 
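
Hand-editing with visudo works, but for scripted setups a drop-in file is easier to automate and can be syntax-checked before it takes effect. A sketch, where a temp file stands in for the real drop-in path:

```shell
# Stage the same rule in a temp file. On a real host you would then run:
#   sudo install -m 0440 "$f" /etc/sudoers.d/tidb
#   visudo -cf /etc/sudoers.d/tidb   # validate syntax before relying on it
f=$(mktemp)
printf 'tidb    ALL=(ALL)       NOPASSWD:ALL\n' > "$f"
cat "$f"
```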

Generate the SSH key

[root@tidb-center ~]# su - tidb
[tidb@tidb-center ~]$ 
[tidb@tidb-center ~]$ ls -ltra
total 12
-rw-r--r--  1 tidb tidb 231 Aug 24  2018 .bashrc
-rw-r--r--  1 tidb tidb 193 Aug 24  2018 .bash_profile
-rw-r--r--  1 tidb tidb  18 Aug 24  2018 .bash_logout
drwxr-xr-x  4 tidb tidb  39 Mar  3 00:39 .mozilla
drwxr-xr-x. 4 root root  34 Mar  3 11:46 ..
drwxrwxr-x  3 tidb tidb  18 Mar  3 11:50 .cache
drwx------  5 tidb tidb 107 Mar  3 11:50 .
drwxrwxr-x  3 tidb tidb  18 Mar  3 11:50 .config
[tidb@tidb-center ~]$ 
[tidb@tidb-center ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/tidb/.ssh/id_rsa): 
Created directory '/home/tidb/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/tidb/.ssh/id_rsa.
Your public key has been saved in /home/tidb/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:rDkzknJUBzhSxeVj7hZnoJj9LKvLUroArVozgB9uiyk tidb@tidb-center
The key's randomart image is:
+---[RSA 2048]----+
|   ..+o..        |
|  . o .o         |
|   . .. *        |
|..   = * o       |
|+ o + o S o      |
|.= o.. * +       |
|o Oo+ B =        |
|E*oB . B         |
|= oo+o.          |
+----[SHA256]-----+
[tidb@tidb-center ~]$ 
[tidb@tidb-center ~]$ ls -ltra         
total 12
-rw-r--r--  1 tidb tidb 231 Aug 24  2018 .bashrc
-rw-r--r--  1 tidb tidb 193 Aug 24  2018 .bash_profile
-rw-r--r--  1 tidb tidb  18 Aug 24  2018 .bash_logout
drwxr-xr-x  4 tidb tidb  39 Mar  3 00:39 .mozilla
drwxr-xr-x. 4 root root  34 Mar  3 11:46 ..
drwxrwxr-x  3 tidb tidb  18 Mar  3 11:50 .cache
drwxrwxr-x  3 tidb tidb  18 Mar  3 11:50 .config
drwx------  6 tidb tidb 119 Mar  3 11:51 .
drwx------  2 tidb tidb  38 Mar  3 11:51 .ssh
[tidb@tidb-center ~]$ 
[tidb@tidb-center ~]$ ls -ltra .ssh/
total 8
drwx------ 6 tidb tidb  119 Mar  3 11:51 ..
-rw-r--r-- 1 tidb tidb  398 Mar  3 11:51 id_rsa.pub
-rw------- 1 tidb tidb 1679 Mar  3 11:51 id_rsa
drwx------ 2 tidb tidb   38 Mar  3 11:51 .
[tidb@tidb-center ~]$ 
[tidb@tidb-center ~]$ 
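
The interactive ssh-keygen session above can also be run non-interactively, which helps if you script the control-machine setup. A sketch that generates a throwaway key pair in a temp directory (point -f at ~/.ssh/id_rsa for the real thing):

```shell
# -q suppresses output, -N '' sets an empty passphrase, -f picks the key path.
d=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N '' -f "$d/id_rsa"
ls -l "$d"
```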

Control machine: download TiDB-Ansible

This involves choosing a TiDB-Ansible branch:

release-2.0	version 2.0	latest 2.0 stable release; production-ready.
release-2.1	version 2.1	latest 2.1 stable release; production-ready (recommended).
master	        master	        includes the latest features; updated daily.

Here I go with 2.1.

[tidb@tidb-center ~]$ 
[tidb@tidb-center ~]$ whoami
tidb
[tidb@tidb-center ~]$ 
[tidb@tidb-center ~]$ pwd
/home/tidb
[tidb@tidb-center ~]$ 
[tidb@tidb-center ~]$ git --version
git version 1.8.3.1
[tidb@tidb-center ~]$ 
[tidb@tidb-center ~]$ git clone -b release-2.1 https://github.com/pingcap/tidb-ansible.git
Cloning into 'tidb-ansible'...
remote: Enumerating objects: 32, done.
remote: Counting objects: 100% (32/32), done.
remote: Compressing objects: 100% (25/25), done.
remote: Total 8829 (delta 11), reused 13 (delta 5), pack-reused 8797
Receiving objects: 100% (8829/8829), 2.16 MiB | 721.00 KiB/s, done.
Resolving deltas: 100% (5330/5330), done.
[tidb@tidb-center ~]$ 

After downloading, take a look:

[tidb@tidb-center ~]$ ls -ltr
total 4
drwxrwxr-x 14 tidb tidb 4096 Mar  3 11:55 tidb-ansible
[tidb@tidb-center ~]$ 
[tidb@tidb-center ~]$ cd tidb-ansible/
[tidb@tidb-center tidb-ansible]$ ls -ltr
total 200
-rw-rw-r--  1 tidb tidb  1147 Mar  3 11:55 README.md
-rw-rw-r--  1 tidb tidb 11294 Mar  3 11:55 LICENSE
-rw-rw-r--  1 tidb tidb   214 Mar  3 11:55 clean_log_cron.yml
drwxrwxr-x  2 tidb tidb    21 Mar  3 11:55 callback_plugins
-rw-rw-r--  1 tidb tidb  1231 Mar  3 11:55 bootstrap.yml
-rw-rw-r--  1 tidb tidb  1142 Mar  3 11:55 ansible.cfg
drwxrwxr-x  3 tidb tidb    25 Mar  3 11:55 cloud
-rw-rw-r--  1 tidb tidb  2265 Mar  3 11:55 collect_diagnosis.yml
drwxrwxr-x  2 tidb tidb  4096 Mar  3 11:55 common_tasks
-rw-rw-r--  1 tidb tidb   516 Mar  3 11:55 create_users.yml
drwxrwxr-x  3 tidb tidb  4096 Mar  3 11:55 conf
-rw-rw-r--  1 tidb tidb 10346 Mar  3 11:55 graceful_stop.yml
drwxrwxr-x  2 tidb tidb    21 Mar  3 11:55 filter_plugins
-rw-rw-r--  1 tidb tidb  3222 Mar  3 11:55 deploy.yml
-rw-rw-r--  1 tidb tidb  2076 Mar  3 11:55 deploy_ntp.yml
-rw-rw-r--  1 tidb tidb  1238 Mar  3 11:55 deploy_drainer.yml
drwxrwxr-x  2 tidb tidb  4096 Mar  3 11:55 group_vars
drwxrwxr-x  2 tidb tidb    24 Mar  3 11:55 log
-rw-rw-r--  1 tidb tidb   116 Mar  3 11:55 local_prepare.yml
drwxrwxr-x  2 tidb tidb    69 Mar  3 11:55 library
-rw-rw-r--  1 tidb tidb  1923 Mar  3 11:55 inventory.ini
-rw-rw-r--  1 tidb tidb   161 Mar  3 11:55 hosts.ini
-rw-rw-r--  1 tidb tidb    52 Mar  3 11:55 requirements.txt
-rw-rw-r--  1 tidb tidb 11003 Mar  3 11:55 migrate_monitor.yml
-rw-rw-r--  1 tidb tidb 10540 Mar  3 11:55 rolling_update.yml
drwxrwxr-x 42 tidb tidb  4096 Mar  3 11:55 roles
-rw-rw-r--  1 tidb tidb 17333 Mar  3 11:55 rolling_update_monitor.yml
drwxrwxr-x  3 tidb tidb  4096 Mar  3 11:55 scripts
-rw-rw-r--  1 tidb tidb 15849 Mar  3 11:55 start.yml
-rw-rw-r--  1 tidb tidb  3633 Mar  3 11:55 unsafe_cleanup.yml
-rw-rw-r--  1 tidb tidb  2051 Mar  3 11:55 unsafe_cleanup_container.yml
drwxrwxr-x  2 tidb tidb    34 Mar  3 11:55 templates
-rw-rw-r--  1 tidb tidb  9297 Mar  3 11:55 stop.yml
-rw-rw-r--  1 tidb tidb   824 Mar  3 11:55 stop_spark.yml
-rw-rw-r--  1 tidb tidb  1543 Mar  3 11:55 stop_drainer.yml
-rw-rw-r--  1 tidb tidb   828 Mar  3 11:55 start_spark.yml
-rw-rw-r--  1 tidb tidb  1542 Mar  3 11:55 start_drainer.yml
-rw-rw-r--  1 tidb tidb  4184 Mar  3 11:55 unsafe_cleanup_data.yml
[tidb@tidb-center tidb-ansible]$ 

Control machine: install Ansible and its dependencies:

[tidb@tidb-center tidb-ansible]$ pwd
/home/tidb/tidb-ansible
[tidb@tidb-center tidb-ansible]$ 
[tidb@tidb-center tidb-ansible]$ whoami
tidb
[tidb@tidb-center tidb-ansible]$ 
[tidb@tidb-center tidb-ansible]$ ls -ltr | grep requirements
-rw-rw-r--  1 tidb tidb    52 Mar  3 11:55 requirements.txt
[tidb@tidb-center tidb-ansible]$ 
[tidb@tidb-center tidb-ansible]$ cat requirements.txt | wc -l
3
[tidb@tidb-center tidb-ansible]$ 
[tidb@tidb-center tidb-ansible]$ cat requirements.txt
ansible>=2.4.2,<2.7.0
jinja2>=2.9.6
jmespath>=0.9.0
[tidb@tidb-center tidb-ansible]$ 
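
Note that requirements.txt pins ansible to a half-open range: >=2.4.2 and <2.7.0. If you ever need to check an already-installed version against such a range from the shell, GNU `sort -V` can do the comparison. A sketch, with hypothetical `version_ge` and `in_range` helpers:

```shell
# version_ge A B -> true when A >= B (sort -V puts the smaller version first).
version_ge() { [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]; }
# in_range V MIN MAX -> true when MIN <= V < MAX, mirroring ">=2.4.2,<2.7.0".
in_range() { version_ge "$1" "$2" && ! version_ge "$1" "$3"; }

in_range 2.6.14 2.4.2 2.7.0 && echo "2.6.14 is in range"
```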

[tidb@tidb-center tidb-ansible]$ pip --version
pip 8.1.2 from /usr/lib/python2.7/site-packages (python 2.7)
[tidb@tidb-center tidb-ansible]$ 
[tidb@tidb-center tidb-ansible]$ sudo pip install -r ./requirements.txt 
Collecting ansible<2.7.0,>=2.4.2 (from -r ./requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/ab/66/08d034825b8f77aa3b1d678c06358acf97373aac9aa186724c989fe51a42/ansible-2.6.14.tar.gz (10.7MB)
    0% |▏                               | 40kB 122kB/s eta 0:01:28

(Wait for it to finish.)

Once it completes, the full command output looks like this:

[tidb@tidb-center tidb-ansible]$ sudo pip install -r ./requirements.txt 
Collecting ansible<2.7.0,>=2.4.2 (from -r ./requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/ab/66/08d034825b8f77aa3b1d678c06358acf97373aac9aa186724c989fe51a42/ansible-2.6.14.tar.gz (10.7MB)
    100% |████████████████████████████████| 10.7MB 56kB/s 
Collecting jinja2>=2.9.6 (from -r ./requirements.txt (line 2))
  Downloading https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl (126kB)
    100% |████████████████████████████████| 133kB 96kB/s 
Collecting jmespath>=0.9.0 (from -r ./requirements.txt (line 3))
  Downloading https://files.pythonhosted.org/packages/83/94/7179c3832a6d45b266ddb2aac329e101367fbdb11f425f13771d27f225bb/jmespath-0.9.4-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): PyYAML in /usr/lib64/python2.7/site-packages (from ansible<2.7.0,>=2.4.2->-r ./requirements.txt (line 1))
Collecting paramiko (from ansible<2.7.0,>=2.4.2->-r ./requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/cf/ae/94e70d49044ccc234bfdba20114fa947d7ba6eb68a2e452d89b920e62227/paramiko-2.4.2-py2.py3-none-any.whl (193kB)
    100% |████████████████████████████████| 194kB 117kB/s 
Collecting cryptography (from ansible<2.7.0,>=2.4.2->-r ./requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/c3/c1/cf8665c955c9393e9ff0872ba6cd3dc6f46ef915e94afcf6e0410508ca69/cryptography-2.6.1-cp27-cp27mu-manylinux1_x86_64.whl (2.3MB)
    100% |████████████████████████████████| 2.3MB 91kB/s 
Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/lib/python2.7/site-packages (from ansible<2.7.0,>=2.4.2->-r ./requirements.txt (line 1))
Collecting MarkupSafe>=0.23 (from jinja2>=2.9.6->-r ./requirements.txt (line 2))
  Downloading https://files.pythonhosted.org/packages/fb/40/f3adb7cf24a8012813c5edb20329eb22d5d8e2a0ecf73d21d6b85865da11/MarkupSafe-1.1.1-cp27-cp27mu-manylinux1_x86_64.whl
Collecting pynacl>=1.0.1 (from paramiko->ansible<2.7.0,>=2.4.2->-r ./requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/b3/25/e605574f24948a8a53b497744e93f061eb1dbe7c44b6465fc1c172d591aa/PyNaCl-1.3.0-cp27-cp27mu-manylinux1_x86_64.whl (762kB)
    100% |████████████████████████████████| 768kB 209kB/s 
Collecting pyasn1>=0.1.7 (from paramiko->ansible<2.7.0,>=2.4.2->-r ./requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/7b/7c/c9386b82a25115cccf1903441bba3cbadcfae7b678a20167347fa8ded34c/pyasn1-0.4.5-py2.py3-none-any.whl (73kB)
    100% |████████████████████████████████| 81kB 197kB/s 
Collecting bcrypt>=3.1.3 (from paramiko->ansible<2.7.0,>=2.4.2->-r ./requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/50/d8/95f7cb04344033bf9d1a12c5a7969a15999b6a710fbe1969c517333d9a62/bcrypt-3.1.6-cp27-cp27mu-manylinux1_x86_64.whl (59kB)
    100% |████████████████████████████████| 61kB 130kB/s 
Collecting enum34; python_version < "3" (from cryptography->ansible<2.7.0,>=2.4.2->-r ./requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/c5/db/e56e6b4bbac7c4a06de1c50de6fe1ef3810018ae11732a50f15f62c7d050/enum34-1.1.6-py2-none-any.whl
Collecting asn1crypto>=0.21.0 (from cryptography->ansible<2.7.0,>=2.4.2->-r ./requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/ea/cd/35485615f45f30a510576f1a56d1e0a7ad7bd8ab5ed7cdc600ef7cd06222/asn1crypto-0.24.0-py2.py3-none-any.whl (101kB)
    100% |████████████████████████████████| 102kB 155kB/s 
Collecting cffi!=1.11.3,>=1.8 (from cryptography->ansible<2.7.0,>=2.4.2->-r ./requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/9d/6f/aea9f5559fb593da07ff34e67513bd62483b45715b4a5f5fae6a0a5792ea/cffi-1.12.2-cp27-cp27mu-manylinux1_x86_64.whl (413kB)
    100% |████████████████████████████████| 419kB 92kB/s 
Requirement already satisfied (use --upgrade to upgrade): six>=1.4.1 in /usr/lib/python2.7/site-packages (from cryptography->ansible<2.7.0,>=2.4.2->-r ./requirements.txt (line 1))
Requirement already satisfied (use --upgrade to upgrade): ipaddress; python_version < "3" in /usr/lib/python2.7/site-packages (from cryptography->ansible<2.7.0,>=2.4.2->-r ./requirements.txt (line 1))
Collecting pycparser (from cffi!=1.11.3,>=1.8->cryptography->ansible<2.7.0,>=2.4.2->-r ./requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/68/9e/49196946aee219aead1290e00d1e7fdeab8567783e83e1b9ab5585e6206a/pycparser-2.19.tar.gz (158kB)
    100% |████████████████████████████████| 163kB 186kB/s 
Installing collected packages: MarkupSafe, jinja2, enum34, asn1crypto, pycparser, cffi, cryptography, pynacl, pyasn1, bcrypt, paramiko, ansible, jmespath
  Running setup.py install for pycparser ... done
  Running setup.py install for ansible ... done
Successfully installed MarkupSafe-1.1.1 ansible-2.6.14 asn1crypto-0.24.0 bcrypt-3.1.6 cffi-1.12.2 cryptography-2.6.1 enum34-1.1.6 jinja2-2.10 jmespath-0.9.4 paramiko-2.4.2 pyasn1-0.4.5 pycparser-2.19 pynacl-1.3.0
You are using pip version 8.1.2, however version 19.0.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
[tidb@tidb-center tidb-ansible]$ 

Once installed, check Ansible's status:

[tidb@tidb-center tidb-ansible]$ ansible --version
ansible 2.6.14
  config file = /home/tidb/tidb-ansible/ansible.cfg
  configured module search path = [u'/home/tidb/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible
  python version = 2.7.5 (default, Nov  1 2018, 03:12:47) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36.0.1)]
[tidb@tidb-center tidb-ansible]$ 

Control machine: configure Ansible

On the target servers, the playbook will automatically:
1. create the tidb user
2. configure sudo privileges
3. set up SSH trust between each target server and the control machine

File: /home/tidb/tidb-ansible/hosts.ini

[tidb@tidb-center tidb-ansible]$ pwd
/home/tidb/tidb-ansible
[tidb@tidb-center tidb-ansible]$ 
[tidb@tidb-center tidb-ansible]$ ls -ltr | grep hosts
-rw-rw-r--  1 tidb tidb   161 Mar  3 11:55 hosts.ini
[tidb@tidb-center tidb-ansible]$ 
[tidb@tidb-center tidb-ansible]$ vi hosts.ini 
[tidb@tidb-center tidb-ansible]$ 
[tidb@tidb-center tidb-ansible]$ cat hosts.ini 
[servers]
10.158.1.98
10.158.1.122
10.158.1.123
10.158.1.125
10.158.1.126
10.158.1.127

[all:vars]
username = tidb
ntp_server = pool.ntp.org
[tidb@tidb-center tidb-ansible]$ 
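
The [servers] list in hosts.ini is also handy for ad-hoc shell loops later on (health checks, log collection). A sketch of extracting it with awk; the file is recreated inline here so the example is self-contained.

```shell
# Recreate a minimal hosts.ini for demonstration.
f=$(mktemp)
cat > "$f" <<'EOF'
[servers]
10.158.1.98
10.158.1.122

[all:vars]
username = tidb
EOF

# Print every non-empty line between [servers] and the next section header.
awk '/^\[servers\]/{s=1;next} /^\[/{s=0} s && NF{print}' "$f"
```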

Now run the playbook against the file just configured:

[tidb@tidb-center tidb-ansible]$ ansible-playbook -i hosts.ini create_users.yml -u root -k
SSH password: 

PLAY [all] **********************************************************************************************************************************************************

TASK [create user] **************************************************************************************************************************************************
changed: [10.158.1.126]
changed: [10.158.1.123]
changed: [10.158.1.125]
changed: [10.158.1.122]
changed: [10.158.1.98]
changed: [10.158.1.127]

TASK [set authorized key] *******************************************************************************************************************************************
changed: [10.158.1.126]
changed: [10.158.1.98]
changed: [10.158.1.122]
changed: [10.158.1.125]
changed: [10.158.1.123]
changed: [10.158.1.127]

TASK [update sudoers file] ******************************************************************************************************************************************
changed: [10.158.1.126]
changed: [10.158.1.122]
changed: [10.158.1.123]
changed: [10.158.1.125]
changed: [10.158.1.98]
changed: [10.158.1.127]

PLAY RECAP **********************************************************************************************************************************************************
10.158.1.122               : ok=3    changed=3    unreachable=0    failed=0   
10.158.1.123               : ok=3    changed=3    unreachable=0    failed=0   
10.158.1.125               : ok=3    changed=3    unreachable=0    failed=0   
10.158.1.126               : ok=3    changed=3    unreachable=0    failed=0   
10.158.1.127               : ok=3    changed=3    unreachable=0    failed=0   
10.158.1.98                : ok=3    changed=3    unreachable=0    failed=0   

Congrats! All goes well. :-)
[tidb@tidb-center tidb-ansible]$ 

Verify:

[tidb@tidb-center tidb-ansible]$ ssh tidb-1 date
Sun Mar  3 12:14:27 CST 2019
[tidb@tidb-center tidb-ansible]$ ssh tidb-2 date
Sun Mar  3 12:14:29 CST 2019
[tidb@tidb-center tidb-ansible]$ ssh tidb-3 date
Sun Mar  3 12:14:33 CST 2019
[tidb@tidb-center tidb-ansible]$ ssh tidb-4 date
Sun Mar  3 12:14:36 CST 2019
[tidb@tidb-center tidb-ansible]$ ssh tidb-5 date
Sun Mar  3 12:14:39 CST 2019
[tidb@tidb-center tidb-ansible]$ ssh tidb-6 date
Sun Mar  3 12:14:42 CST 2019
[tidb@tidb-center tidb-ansible]$ 

No problems: every remote SSH login now works without a password, so SSH trust has been established successfully.
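
The check above can also be scripted instead of run by hand. A minimal sketch (hostnames taken from /etc/hosts above; BatchMode is a standard OpenSSH option that makes ssh fail instead of prompting for a password, so a broken trust relationship shows up as FAILED rather than a hang):

```shell
# Verify passwordless SSH to every cluster node, non-interactively.
check_ssh_trust() {
  for host in tidb-1 tidb-2 tidb-3 tidb-4 tidb-5 tidb-6; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; then
      echo "$host: OK"
    else
      echo "$host: FAILED"
    fi
  done
}
check_ssh_trust
```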

Install the NTP service on the target servers via Ansible.

[tidb@tidb-center tidb-ansible]$ pwd
/home/tidb/tidb-ansible
[tidb@tidb-center tidb-ansible]$ 
[tidb@tidb-center tidb-ansible]$ ansible-playbook -i hosts.ini deploy_ntp.yml -u tidb -b

PLAY [all] **********************************************************************************************************************************************************

TASK [get facts] ****************************************************************************************************************************************************
ok: [10.158.1.122]
ok: [10.158.1.123]
ok: [10.158.1.125]
ok: [10.158.1.126]
ok: [10.158.1.98]
ok: [10.158.1.127]

TASK [RedHat family Linux distribution - make sure ntp, ntpstat have been installed] ********************************************************************************
changed: [10.158.1.125] => (item=[u'ntp'])
changed: [10.158.1.123] => (item=[u'ntp'])
changed: [10.158.1.126] => (item=[u'ntp'])
changed: [10.158.1.122] => (item=[u'ntp'])
changed: [10.158.1.98] => (item=[u'ntp'])
changed: [10.158.1.127] => (item=[u'ntp'])

TASK [RedHat family Linux distribution - make sure ntpdate have been installed] *************************************************************************************
ok: [10.158.1.98] => (item=[u'ntpdate'])
ok: [10.158.1.122] => (item=[u'ntpdate'])
ok: [10.158.1.123] => (item=[u'ntpdate'])
ok: [10.158.1.125] => (item=[u'ntpdate'])
ok: [10.158.1.126] => (item=[u'ntpdate'])
ok: [10.158.1.127] => (item=[u'ntpdate'])

TASK [Debian family Linux distribution - make sure ntp, ntpstat have been installed] ********************************************************************************

TASK [Debian family Linux distribution - make sure ntpdate have been installed] *************************************************************************************

TASK [RedHat family Linux distribution - make sure ntpd service has been stopped] ***********************************************************************************
ok: [10.158.1.126]
ok: [10.158.1.122]
ok: [10.158.1.123]
ok: [10.158.1.98]
ok: [10.158.1.125]
ok: [10.158.1.127]

TASK [Debian family Linux distribution - make sure ntp service has been stopped] ************************************************************************************

TASK [Adjust Time | start to adjust time with pool.ntp.org] *********************************************************************************************************
changed: [10.158.1.122]
changed: [10.158.1.98]
changed: [10.158.1.126]
fatal: [10.158.1.123]: FAILED! => {"changed": true, "cmd": "ntpdate pool.ntp.org", "delta": "0:00:08.362920", "end": "2019-03-03 12:19:59.987912", "msg": "non-zero return code", "rc": 1, "start": "2019-03-03 12:19:51.624992", "stderr": " 3 Mar 12:19:59 ntpdate[18688]: no server suitable for synchronization found", "stderr_lines": [" 3 Mar 12:19:59 ntpdate[18688]: no server suitable for synchronization found"], "stdout": "", "stdout_lines": []}
fatal: [10.158.1.125]: FAILED! => {"changed": true, "cmd": "ntpdate pool.ntp.org", "delta": "0:00:08.363167", "end": "2019-03-03 12:20:00.515097", "msg": "non-zero return code", "rc": 1, "start": "2019-03-03 12:19:52.151930", "stderr": " 3 Mar 12:20:00 ntpdate[18857]: no server suitable for synchronization found", "stderr_lines": [" 3 Mar 12:20:00 ntpdate[18857]: no server suitable for synchronization found"], "stdout": "", "stdout_lines": []}
changed: [10.158.1.127]

TASK [RedHat family Linux distribution - make sure ntpd service has been started] ***********************************************************************************
changed: [10.158.1.98]
changed: [10.158.1.122]
changed: [10.158.1.126]
changed: [10.158.1.127]

TASK [Debian family Linux distribution - Make sure ntp service has been started] ************************************************************************************
        to retry, use: --limit @/home/tidb/tidb-ansible/retry_files/deploy_ntp.retry

PLAY RECAP **********************************************************************************************************************************************************
10.158.1.122               : ok=6    changed=3    unreachable=0    failed=0   
10.158.1.123               : ok=4    changed=1    unreachable=0    failed=1   
10.158.1.125               : ok=4    changed=1    unreachable=0    failed=1   
10.158.1.126               : ok=6    changed=3    unreachable=0    failed=0   
10.158.1.127               : ok=6    changed=3    unreachable=0    failed=0   
10.158.1.98                : ok=6    changed=3    unreachable=0    failed=0   


ERROR MESSAGE SUMMARY ***********************************************************************************************************************************************
[10.158.1.123]: Ansible FAILED! => playbook: deploy_ntp.yml; TASK: Adjust Time | start to adjust time with pool.ntp.org; message: {"changed": true, "cmd": "ntpdate pool.ntp.org", "delta": "0:00:08.362920", "end": "2019-03-03 12:19:59.987912", "msg": "non-zero return code", "rc": 1, "start": "2019-03-03 12:19:51.624992", "stderr": " 3 Mar 12:19:59 ntpdate[18688]: no server suitable for synchronization found", "stderr_lines": [" 3 Mar 12:19:59 ntpdate[18688]: no server suitable for synchronization found"], "stdout": "", "stdout_lines": []}

[10.158.1.125]: Ansible FAILED! => playbook: deploy_ntp.yml; TASK: Adjust Time | start to adjust time with pool.ntp.org; message: {"changed": true, "cmd": "ntpdate pool.ntp.org", "delta": "0:00:08.363167", "end": "2019-03-03 12:20:00.515097", "msg": "non-zero return code", "rc": 1, "start": "2019-03-03 12:19:52.151930", "stderr": " 3 Mar 12:20:00 ntpdate[18857]: no server suitable for synchronization found", "stderr_lines": [" 3 Mar 12:20:00 ntpdate[18857]: no server suitable for synchronization found"], "stdout": "", "stdout_lines": []}

Ask for help:
Contact us: support@pingcap.com
It seems that you encounter some problems. You can send an email to the above email address, attached with the tidb-ansible/inventory.ini and tidb-ansible/log/ansible.log files and the error message, or new issue on https://github.com/pingcap/tidb-ansible/issues. We'll try our best to help you deploy a TiDB cluster. Thanks. :-)
[tidb@tidb-center tidb-ansible]$ 

As you can see, two of the nodes ran into a problem here.

The failure means the NTP server the nodes point at (pool.ntp.org) is unreachable from those nodes, or does not offer a usable NTP service.

Solution:
Run an NTP service on the center server and point the cluster nodes at it:

[root@tidb-center ~]# cat /etc/ntp.conf | grep -v "^#"

driftfile /var/lib/ntp/drift

restrict default nomodify notrap nopeer noquery

restrict 127.0.0.1 
restrict ::1

restrict 10.158.1.94 nomodify notrap nopeer noquery
restrict 10.158.0.1 mask 255.255.255.0 nomodify notrap


server 0.rhel.pool.ntp.org iburst
server 1.rhel.pool.ntp.org iburst
server 2.rhel.pool.ntp.org iburst
server 3.rhel.pool.ntp.org iburst

server 127.127.1.0
fudge 127.127.1.0 stratum 10

includefile /etc/ntp/crypto/pw

keys /etc/ntp/keys

disable monitor
[root@tidb-center ~]# 
[root@tidb-center ~]# service ntpd status
Redirecting to /bin/systemctl status ntpd.service
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
[root@tidb-center ~]# 
[root@tidb-center ~]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@tidb-center ~]# 
[root@tidb-center ~]# systemctl start ntpd
[root@tidb-center ~]# 
[root@tidb-center ~]# systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-03-03 12:34:05 CST; 7s ago
  Process: 8289 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 8291 (ntpd)
    Tasks: 1
   CGroup: /system.slice/ntpd.service
           └─8291 /usr/sbin/ntpd -u ntp:ntp -g

Mar 03 12:34:05 tidb-center ntpd[8291]: Listen normally on 2 lo 127.0.0.1 UDP 123
Mar 03 12:34:05 tidb-center ntpd[8291]: Listen normally on 3 ens192 10.158.1.94 UDP 123
Mar 03 12:34:05 tidb-center ntpd[8291]: Listen normally on 4 virbr0 192.168.122.1 UDP 123
Mar 03 12:34:05 tidb-center ntpd[8291]: Listen normally on 5 lo ::1 UDP 123
Mar 03 12:34:05 tidb-center ntpd[8291]: Listen normally on 6 ens192 fe80::f856:2592:3d4e:3398 UDP 123
Mar 03 12:34:05 tidb-center ntpd[8291]: Listening on routing socket on fd #23 for interface updates
Mar 03 12:34:05 tidb-center ntpd[8291]: 0.0.0.0 c016 06 restart
Mar 03 12:34:05 tidb-center ntpd[8291]: 0.0.0.0 c012 02 freq_set kernel 0.000 PPM
Mar 03 12:34:05 tidb-center ntpd[8291]: 0.0.0.0 c011 01 freq_not_set
Mar 03 12:34:09 tidb-center ntpd[8291]: 0.0.0.0 c514 04 freq_mode
[root@tidb-center ~]# 

Then try again:

[root@tidb-center ~]# su - tidb 
Last login: Sun Mar  3 12:35:06 CST 2019 on pts/0
[tidb@tidb-center ~]$ cd tidb-ansible/
[tidb@tidb-center tidb-ansible]$ 
[tidb@tidb-center tidb-ansible]$ cat hosts.ini 
[servers]
10.158.1.98
10.158.1.122
10.158.1.123
10.158.1.125
10.158.1.126
10.158.1.127

[all:vars]
username = tidb
ntp_server = tidb-center
[tidb@tidb-center tidb-ansible]$ 
[tidb@tidb-center tidb-ansible]$ ansible-playbook -i hosts.ini deploy_ntp.yml -u tidb -b

PLAY [all] **********************************************************************************************************************************************************

TASK [get facts] ****************************************************************************************************************************************************
ok: [10.158.1.122]
ok: [10.158.1.126]
ok: [10.158.1.123]
ok: [10.158.1.125]
ok: [10.158.1.98]
ok: [10.158.1.127]

TASK [RedHat family Linux distribution - make sure ntp, ntpstat have been installed] ********************************************************************************
ok: [10.158.1.126] => (item=[u'ntp'])
ok: [10.158.1.125] => (item=[u'ntp'])
ok: [10.158.1.122] => (item=[u'ntp'])
ok: [10.158.1.123] => (item=[u'ntp'])
ok: [10.158.1.98] => (item=[u'ntp'])
ok: [10.158.1.127] => (item=[u'ntp'])

TASK [RedHat family Linux distribution - make sure ntpdate have been installed] *************************************************************************************
ok: [10.158.1.98] => (item=[u'ntpdate'])
ok: [10.158.1.122] => (item=[u'ntpdate'])
ok: [10.158.1.123] => (item=[u'ntpdate'])
ok: [10.158.1.126] => (item=[u'ntpdate'])
ok: [10.158.1.125] => (item=[u'ntpdate'])
ok: [10.158.1.127] => (item=[u'ntpdate'])

TASK [Debian family Linux distribution - make sure ntp, ntpstat have been installed] ********************************************************************************

TASK [Debian family Linux distribution - make sure ntpdate have been installed] *************************************************************************************

TASK [RedHat family Linux distribution - make sure ntpd service has been stopped] ***********************************************************************************
changed: [10.158.1.126]
ok: [10.158.1.125]
changed: [10.158.1.98]
changed: [10.158.1.122]
ok: [10.158.1.123]
changed: [10.158.1.127]

TASK [Debian family Linux distribution - make sure ntp service has been stopped] ************************************************************************************

TASK [Adjust Time | start to adjust time with tidb-center] **********************************************************************************************************
changed: [10.158.1.123]
changed: [10.158.1.122]
changed: [10.158.1.126]
changed: [10.158.1.125]
changed: [10.158.1.98]
changed: [10.158.1.127]

TASK [RedHat family Linux distribution - make sure ntpd service has been started] ***********************************************************************************
changed: [10.158.1.98]
changed: [10.158.1.122]
changed: [10.158.1.123]
changed: [10.158.1.126]
changed: [10.158.1.125]
changed: [10.158.1.127]

TASK [Debian family Linux distribution - Make sure ntp service has been started] ************************************************************************************

PLAY RECAP **********************************************************************************************************************************************************
10.158.1.122               : ok=6    changed=3    unreachable=0    failed=0   
10.158.1.123               : ok=6    changed=2    unreachable=0    failed=0   
10.158.1.125               : ok=6    changed=2    unreachable=0    failed=0   
10.158.1.126               : ok=6    changed=3    unreachable=0    failed=0   
10.158.1.127               : ok=6    changed=3    unreachable=0    failed=0   
10.158.1.98                : ok=6    changed=3    unreachable=0    failed=0   

Congrats! All goes well. :-)
[tidb@tidb-center tidb-ansible]$ 
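
As a follow-up sanity check, you can ask each node whether ntpd is actually synchronized. A sketch, assuming the hostnames from /etc/hosts and that ntpstat/ntpq are installed (the playbook installed them above); ntpstat exits 0 only when the clock is synchronized:

```shell
# Report NTP synchronization status for every cluster node.
check_ntp_sync() {
  for host in tidb-1 tidb-2 tidb-3 tidb-4 tidb-5 tidb-6; do
    echo "== $host =="
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" 'ntpstat || ntpq -p' 2>/dev/null \
      || echo "(unreachable)"
  done
}
check_ntp_sync
```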

Side note:
NTP's role here is to keep the system clocks of all servers in the TiDB cluster consistent.
Since my environment is built entirely on VMware vCenter, I could alternatively solve this with VMware Tools time synchronization.

Enable time synchronization:

[root@tidb-center ~]# vmware-toolbox-cmd timesync status
Disabled
[root@tidb-center ~]# 
[root@tidb-center ~]# vmware-toolbox-cmd timesync enable
Enabled
[root@tidb-center ~]# 

Target servers (the six cluster nodes): configure the CPUfreq governor
To get the most performance out of the CPUs, set the CPUfreq governor to performance mode.

For details on CPUfreq, see Red Hat's official documentation:
https://access.redhat.com/documentation/zh-cn/red_hat_enterprise_linux/7/html/power_management_guide/cpufreq_governors#cpufreq_setup

Check which governor modes the system supports:

[root@tidb-center ~]# cpupower frequency-info --governors
analyzing CPU 0:
  available cpufreq governors: Not Available
[root@tidb-center ~]# 

My system does not expose any cpufreq governors (common for virtual machines).
Per the official documentation, this step can be skipped in that case.
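
For reference, on hosts that do expose cpufreq, the performance governor can be applied through sysfs. A sketch, assuming the standard Linux cpufreq sysfs layout; it silently does nothing on machines like mine where no scaling_governor files exist:

```shell
# Set the performance governor on every CPU that exposes one.
set_performance_governor() {
  applied=0
  for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    # If the glob matched nothing, $gov is the literal pattern and -w fails,
    # which is exactly the "Not Available" case above.
    if [ -w "$gov" ]; then
      echo performance > "$gov"
      applied=$((applied + 1))
    fi
  done
  echo "governor set on $applied CPU(s)"
}
set_performance_governor
```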

On the target servers, add a data disk (100 GB) and mount it with an ext4 filesystem

The official documentation uses a 1000 GB disk here, but my lab environment is limited, so I use 100 GB.

Check the newly added disk:

[root@tidb-1 ~]# fdisk -l | grep "Disk"
Disk /dev/sda: 64.4 GB, 64424509440 bytes, 125829120 sectors
Disk label type: dos
Disk identifier: 0x0009203e
Disk /dev/mapper/ol_tidb--1-root: 61.2 GB, 61199089664 bytes, 119529472 sectors
Disk /dev/mapper/ol_tidb--1-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
[root@tidb-1 ~]# 
[root@tidb-1 ~]# ls -ltr --time-style=long-iso /dev/sd*
brw-rw---- 1 root disk 8,  0 2019-03-03 11:32 /dev/sda
brw-rw---- 1 root disk 8,  2 2019-03-03 11:32 /dev/sda2
brw-rw---- 1 root disk 8,  1 2019-03-03 11:32 /dev/sda1
brw-rw---- 1 root disk 8, 16 2019-03-03 12:50 /dev/sdb
[root@tidb-1 ~]# 
[root@tidb-1 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@tidb-1 ~]# 

As shown, the disk currently has no partitions.

Create a partition and check the result:

[root@tidb-1 ~]# parted -s -a optimal /dev/sdb mklabel gpt -- mkpart primary ext4 1 -1
[root@tidb-1 ~]# 
[root@tidb-1 ~]# fdisk -l /dev/sdb
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: C55C9FD0-F2A7-4E90-977D-447C1310886A


#         Start          End    Size  Type            Name
 1         2048    209713151    100G  Microsoft basic primary
[root@tidb-1 ~]# 

Create the ext4 filesystem:

(Note: mkfs.ext4 is run here on the whole device /dev/sdb rather than on the /dev/sdb1 partition just created, which overwrites the GPT label; formatting /dev/sdb1 instead would preserve the partition table. Hence the "entire device" warning below.)

[root@tidb-1 ~]# mkfs.ext4 /dev/sdb
mke2fs 1.42.9 (28-Dec-2013)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6553600 inodes, 26214400 blocks
1310720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2174746624
800 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

[root@tidb-1 ~]# 

Check the disk's UUID:

[root@tidb-1 ~]# lsblk -f
NAME                FSTYPE      LABEL UUID                                   MOUNTPOINT
sdb                 ext4              164ccb57-a1a2-4c48-bdbc-e9cac1f9e254   
sr0                                                                          
fd0                                                                          
sda                                                                          
├─sda2              LVM2_member       VFIbkT-3Vwi-D6Z7-GRKH-hhix-l0XZ-23mdBA 
│ ├─ol_tidb--1-swap swap              d9a45728-b23b-415b-8e69-4c653ca1496a   [SWAP]
│ └─ol_tidb--1-root xfs               c4ef80aa-c386-4cc7-bdfe-953aca40a7a6   /
└─sda1              xfs               2c4069b3-5fcf-4e61-8c1e-3cec20e331db   /boot
[root@tidb-1 ~]# 

Create the mount point:

[root@tidb-1 ~]# mkdir /tidb_data_1
[root@tidb-1 ~]# 

Write the entry into fstab.

File: /etc/fstab

[root@tidb-1 ~]# vi /etc/fstab 
[root@tidb-1 ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Sun Mar  3 00:42:16 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/ol_tidb--1-root /                       xfs     defaults        0 0
UUID=2c4069b3-5fcf-4e61-8c1e-3cec20e331db /boot                   xfs     defaults        0 0
/dev/mapper/ol_tidb--1-swap swap                    swap    defaults        0 0

UUID=164ccb57-a1a2-4c48-bdbc-e9cac1f9e254       /tidb_data_1    ext4    defaults,nodelalloc,noatime 0 2
[root@tidb-1 ~]# 
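
Hand-copying the UUID out of lsblk is error-prone. A hypothetical helper (the function name and arguments are mine, not from the official docs) that builds the fstab line from blkid output; append its output to /etc/fstab once verified:

```shell
# Print an fstab entry for a device, using its filesystem UUID.
fstab_entry() {
  dev=$1; mnt=$2
  uuid=$(blkid -s UUID -o value "$dev" 2>/dev/null)
  if [ -n "$uuid" ]; then
    printf 'UUID=%s %s ext4 defaults,nodelalloc,noatime 0 2\n' "$uuid" "$mnt"
  else
    echo "no filesystem UUID found on $dev" >&2
    return 1
  fi
}
# Example: fstab_entry /dev/sdb /tidb_data_1 >> /etc/fstab, then mount -a
fstab_entry /dev/sdb /tidb_data_1 || true
```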

Mount:

[root@tidb-1 ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
devtmpfs                     2.8G     0  2.8G   0% /dev
tmpfs                        2.8G     0  2.8G   0% /dev/shm
tmpfs                        2.8G  9.3M  2.8G   1% /run
tmpfs                        2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/mapper/ol_tidb--1-root   57G  4.0G   53G   7% /
/dev/sda1                   1014M  225M  790M  23% /boot
tmpfs                        570M   12K  570M   1% /run/user/42
tmpfs                        570M     0  570M   0% /run/user/0
[root@tidb-1 ~]# 
[root@tidb-1 ~]# mount -a
[root@tidb-1 ~]# 
[root@tidb-1 ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
devtmpfs                     2.8G     0  2.8G   0% /dev
tmpfs                        2.8G     0  2.8G   0% /dev/shm
tmpfs                        2.8G  9.3M  2.8G   1% /run
tmpfs                        2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/mapper/ol_tidb--1-root   57G  4.0G   53G   7% /
/dev/sda1                   1014M  225M  790M  23% /boot
tmpfs                        570M   12K  570M   1% /run/user/42
tmpfs                        570M     0  570M   0% /run/user/0
/dev/sdb                      99G   61M   94G   1% /tidb_data_1
[root@tidb-1 ~]# 
[root@tidb-1 ~]# mount -t ext4
/dev/sdb on /tidb_data_1 type ext4 (rw,noatime,nodelalloc,data=ordered)
[root@tidb-1 ~]# 
[root@tidb-1 ~]# lsblk -f
NAME                FSTYPE      LABEL UUID                                   MOUNTPOINT
sdb                 ext4              164ccb57-a1a2-4c48-bdbc-e9cac1f9e254   /tidb_data_1
sr0                                                                          
fd0                                                                          
sda                                                                          
├─sda2              LVM2_member       VFIbkT-3Vwi-D6Z7-GRKH-hhix-l0XZ-23mdBA 
│ ├─ol_tidb--1-swap swap              d9a45728-b23b-415b-8e69-4c653ca1496a   [SWAP]
│ └─ol_tidb--1-root xfs               c4ef80aa-c386-4cc7-bdfe-953aca40a7a6   /
└─sda1              xfs               2c4069b3-5fcf-4e61-8c1e-3cec20e331db   /boot
[root@tidb-1 ~]#

Allocate cluster node resources

File: tidb-ansible/inventory.ini

[root@tidb-center ~]# su - tidb 
Last login: Sun Mar  3 12:39:07 CST 2019 on pts/0
[tidb@tidb-center ~]$ cd tidb-ansible/
[tidb@tidb-center tidb-ansible]$ 
[tidb@tidb-center tidb-ansible]$ ls -ltr inventory.ini 
-rw-rw-r-- 1 tidb tidb 1923 Mar  3 11:55 inventory.ini
[tidb@tidb-center tidb-ansible]$ 

[tidb@tidb-center tidb-ansible]$ vi inventory.ini 
[tidb@tidb-center tidb-ansible]$ 
[tidb@tidb-center tidb-ansible]$ cat inventory.ini   
## TiDB Cluster Part
[tidb_servers]
#192.168.0.2

10.158.1.98
10.158.1.122

[tikv_servers]
#192.168.0.3
#192.168.0.4
#192.168.0.5

TiKV-1-1 ansible_host=10.158.1.125 deploy_dir=/tidb_data_1/data1/deploy tikv_port=20171 labels="host=tikv1"
TiKV-2-1 ansible_host=10.158.1.125 deploy_dir=/tidb_data_1/data2/deploy tikv_port=20172 labels="host=tikv1"

TiKV-1-2 ansible_host=10.158.1.126 deploy_dir=/tidb_data_1/data1/deploy tikv_port=20171 labels="host=tikv2"
TiKV-2-2 ansible_host=10.158.1.126 deploy_dir=/tidb_data_1/data2/deploy tikv_port=20172 labels="host=tikv2"

TiKV-1-3 ansible_host=10.158.1.127 deploy_dir=/tidb_data_1/data1/deploy tikv_port=20171 labels="host=tikv3"
TiKV-2-3 ansible_host=10.158.1.127 deploy_dir=/tidb_data_1/data2/deploy tikv_port=20172 labels="host=tikv3"

[pd_servers]
#192.168.0.6
#192.168.0.7
#192.168.0.8

10.158.1.98
10.158.1.122
10.158.1.123

[spark_master]

[spark_slaves]

[lightning_server]

[importer_server]

## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
#192.168.0.10

10.158.1.98

[grafana_servers]
#192.168.0.10

10.158.1.98

# node_exporter and blackbox_exporter servers
[monitored_servers]
#192.168.0.2
#192.168.0.3
#192.168.0.4
#192.168.0.5
#192.168.0.6
#192.168.0.7
#192.168.0.8
#192.168.0.10

10.158.1.98
10.158.1.122
10.158.1.123
10.158.1.125
10.158.1.126
10.158.1.127

[alertmanager_servers]
#192.168.0.10

[kafka_exporter_servers]

## Binlog Part
[pump_servers]

[drainer_servers]

## Group variables
[pd_servers:vars]
# location_labels = ["zone","rack","host"]

location_labels = ["host"]

## Global variables
[all:vars]
#deploy_dir = /home/tidb/deploy

## Connection
# ssh via normal user
ansible_user = tidb

cluster_name = adamhuan-cluster

tidb_version = v2.1.5

# process supervision, [systemd, supervise]
process_supervision = systemd

timezone = Asia/Shanghai

enable_firewalld = False
# check NTP service
enable_ntpd = True
set_hostname = False

## binlog trigger
enable_binlog = False

# kafka cluster address for monitoring, example:
# kafka_addrs = "192.168.0.11:9092,192.168.0.12:9092,192.168.0.13:9092"
kafka_addrs = ""

# zookeeper address of kafka cluster for monitoring, example:
# zookeeper_addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181"
zookeeper_addrs = ""

# store slow query log into seperate file
enable_slow_query_log = False

# enable TLS authentication in the TiDB cluster
enable_tls = False

# KV mode
deploy_without_tidb = False

# Optional: Set if you already have a alertmanager server.
# Format: alertmanager_host:alertmanager_port
alertmanager_target = ""

grafana_admin_user = "admin"
grafana_admin_password = "admin"


### Collect diagnosis
collect_log_recent_hours = 2

enable_bandwidth_limit = True
# default: 10Mb/s, unit: Kbit/s
collect_bandwidth_limit = 10000
[tidb@tidb-center tidb-ansible]$ 

Review the other tunable parameters:

[tidb@tidb-center ~]$ pwd
/home/tidb
[tidb@tidb-center ~]$ 
[tidb@tidb-center ~]$ cd tidb-ansible/conf/
[tidb@tidb-center conf]$ 
[tidb@tidb-center conf]$ ls -ltr
total 68
drwxrwxr-x 2 tidb tidb    47 Mar  3 11:55 ssl
-rw-rw-r-- 1 tidb tidb  4370 Mar  3 11:55 spark-env.yml
-rw-rw-r-- 1 tidb tidb  3219 Mar  3 11:55 spark-defaults.yml
-rw-rw-r-- 1 tidb tidb   728 Mar  3 11:55 pump.yml
-rw-rw-r-- 1 tidb tidb  1577 Mar  3 11:55 pd.yml
-rw-rw-r-- 1 tidb tidb   858 Mar  3 11:55 grafana_collector.toml
-rw-rw-r-- 1 tidb tidb  1697 Mar  3 11:55 drainer.toml
-rw-rw-r-- 1 tidb tidb  1963 Mar  3 11:55 alertmanager.yml
-rw-rw-r-- 1 tidb tidb 19646 Mar  3 11:55 tikv.yml
-rw-rw-r-- 1 tidb tidb  1851 Mar  3 11:55 tikv-importer.yml
-rw-rw-r-- 1 tidb tidb  7184 Mar  3 11:55 tidb.yml
-rw-rw-r-- 1 tidb tidb  4092 Mar  3 11:55 tidb-lightning.yml
[tidb@tidb-center conf]$ 
[tidb@tidb-center conf]$ cat tikv.yml | grep -v "#" | strings
global:
readpool:
  storage:
  coprocessor:
server:
storage:
metric:
raftstore:
  capacity: 100GB
coprocessor:
rocksdb:
  defaultcf:
    block-cache-size: "6GB"
  writecf:
    block-cache-size: "4GB"
  lockcf:
    block-cache-size: "256MB"
raftdb:
  defaultcf:
    block-cache-size: "2GB"
security:
import:
[tidb@tidb-center conf]$ 

From the official documentation:
rocksdb defaultcf block-cache-size (GB) = MEM * 80% / number of TiKV instances * 30%
rocksdb writecf block-cache-size (GB) = MEM * 80% / number of TiKV instances * 45%
rocksdb lockcf block-cache-size (GB) = MEM * 80% / number of TiKV instances * 2.5% (minimum 128 MB)
raftdb defaultcf block-cache-size (GB) = MEM * 80% / number of TiKV instances * 2.5% (minimum 128 MB)
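
Applied to this deployment's TiKV hosts (16 GB RAM, 2 TiKV instances per host), the formulas work out as follows. A sketch, using awk only for the floating-point arithmetic:

```shell
# Compute the recommended block-cache sizes from memory and instance count.
tikv_block_cache() {
  mem_gb=$1; instances=$2
  awk -v m="$mem_gb" -v n="$instances" 'BEGIN {
    base = m * 0.8 / n                                   # per-instance budget
    lock = base * 0.025; if (lock < 0.128) lock = 0.128  # 128 MB floor
    raft = base * 0.025; if (raft < 0.128) raft = 0.128
    printf "rocksdb defaultcf block-cache-size: %.2f GB\n", base * 0.30
    printf "rocksdb writecf   block-cache-size: %.2f GB\n", base * 0.45
    printf "rocksdb lockcf    block-cache-size: %.2f GB\n", lock
    printf "raftdb  defaultcf block-cache-size: %.2f GB\n", raft
  }'
}
tikv_block_cache 16 2
# → defaultcf 1.92 GB, writecf 2.88 GB, lockcf 0.16 GB, raftdb defaultcf 0.16 GB
```

For a host of this size these computed values would replace the larger defaults shown in conf/tikv.yml above.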

2. Perform the deployment

[tidb@tidb-center tidb-ansible]$ pwd
/home/tidb/tidb-ansible
[tidb@tidb-center tidb-ansible]$ 
[tidb@tidb-center tidb-ansible]$ ls
ansible.cfg            common_tasks        fact_files         library              requirements.txt            start_drainer.yml  templates
bootstrap.yml          conf                filter_plugins     LICENSE              retry_files                 start_spark.yml    unsafe_cleanup_container.yml
callback_plugins       create_users.yml    graceful_stop.yml  local_prepare.yml    roles                       start.yml          unsafe_cleanup_data.yml
clean_log_cron.yml     deploy_drainer.yml  group_vars         log                  rolling_update_monitor.yml  stop_drainer.yml   unsafe_cleanup.yml
cloud                  deploy_ntp.yml      hosts.ini          migrate_monitor.yml  rolling_update.yml          stop_spark.yml
collect_diagnosis.yml  deploy.yml          inventory.ini      README.md            scripts                     stop.yml
[tidb@tidb-center tidb-ansible]$ 

Test SSH trust:

[tidb@tidb-center tidb-ansible]$ ansible -i inventory.ini all -m shell -a 'whoami'
10.158.1.122 | SUCCESS | rc=0 >>
tidb

TiKV-1-2 | SUCCESS | rc=0 >>
tidb

10.158.1.98 | SUCCESS | rc=0 >>
tidb

TiKV-1-1 | SUCCESS | rc=0 >>
tidb

TiKV-2-1 | SUCCESS | rc=0 >>
tidb

TiKV-2-2 | SUCCESS | rc=0 >>
tidb

10.158.1.125 | SUCCESS | rc=0 >>
tidb

10.158.1.123 | SUCCESS | rc=0 >>
tidb

10.158.1.126 | SUCCESS | rc=0 >>
tidb

TiKV-2-3 | SUCCESS | rc=0 >>
tidb

TiKV-1-3 | SUCCESS | rc=0 >>
tidb

10.158.1.127 | SUCCESS | rc=0 >>
tidb

[tidb@tidb-center tidb-ansible]$ 

Test passwordless sudo:

[tidb@tidb-center tidb-ansible]$ ansible -i inventory.ini all -m shell -a 'whoami' -b
10.158.1.122 | SUCCESS | rc=0 >>
root

10.158.1.123 | SUCCESS | rc=0 >>
root

10.158.1.98 | SUCCESS | rc=0 >>
root

TiKV-2-1 | SUCCESS | rc=0 >>
root

TiKV-1-1 | SUCCESS | rc=0 >>
root

TiKV-2-2 | SUCCESS | rc=0 >>
root

TiKV-1-2 | SUCCESS | rc=0 >>
root

10.158.1.125 | SUCCESS | rc=0 >>
root

10.158.1.126 | SUCCESS | rc=0 >>
root

TiKV-1-3 | SUCCESS | rc=0 >>
root

TiKV-2-3 | SUCCESS | rc=0 >>
root

10.158.1.127 | SUCCESS | rc=0 >>
root

[tidb@tidb-center tidb-ansible]$ 

Download the TiDB packages from the Internet to the center server:

[tidb@tidb-center tidb-ansible]$ ansible-playbook local_prepare.yml

PLAY [do local preparation] *****************************************************************************************************************************************

TASK [local : Stop if ansible version is too low, make sure that the Ansible version is Ansible 2.4.2 or later, otherwise a compatibility issue occurs.] ************
ok: [localhost] => {
    "changed": false, 
    "msg": "All assertions passed"
}

TASK [local : create downloads and resources directories] ***********************************************************************************************************
changed: [localhost] => (item=/home/tidb/tidb-ansible/downloads)
changed: [localhost] => (item=/home/tidb/tidb-ansible/resources)
changed: [localhost] => (item=/home/tidb/tidb-ansible/resources/bin)

TASK [local : create cert directory] ********************************************************************************************************************************

TASK [local : create packages.yml] **********************************************************************************************************************************
changed: [localhost]

TASK [local : create specific deployment method packages.yml] *******************************************************************************************************
changed: [localhost]

TASK [local : include_vars] *****************************************************************************************************************************************
ok: [localhost]

TASK [local : include_vars] *****************************************************************************************************************************************
ok: [localhost]

TASK [local : detect outbound network] ******************************************************************************************************************************
ok: [localhost]

TASK [local : set outbound network fact] ****************************************************************************************************************************
ok: [localhost]

TASK [local : fail] *************************************************************************************************************************************************

TASK [local : detect GFW] *******************************************************************************************************************************************
ok: [localhost]

TASK [local : set GFW fact] *****************************************************************************************************************************************
ok: [localhost]

TASK [local : download tidb binary] *********************************************************************************************************************************

(Wait for the downloads to finish.)

While this runs, you can watch the download directory grow:

[root@tidb-center ~]# du -sh /home/tidb/tidb-ansible/downloads/
583M    /home/tidb/tidb-ansible/downloads/
[root@tidb-center ~]# 
[root@tidb-center ~]# du -sh /home/tidb/tidb-ansible/downloads/
807M    /home/tidb/tidb-ansible/downloads/
[root@tidb-center ~]# 
[root@tidb-center ~]# ls -ltr /home/tidb/tidb-ansible/downloads 
total 623424
-rw-rw-r-- 1 tidb tidb 273820544 Mar  3 14:02 tidb-v2.1.5.tar.gz
-rw-rw-r-- 1 tidb tidb   1548328 Mar  3 14:02 fio-3.8.tar.gz
-rw-rw-r-- 1 tidb tidb   4391029 Mar  3 14:02 grafana_collector-latest.tar.gz
-rw-rw-r-- 1 tidb tidb   3940228 Mar  3 14:02 kafka_exporter-1.1.0.tar.gz
-rw-rw-r-- 1 tidb tidb   3577327 Mar  3 14:02 tidb-insight.tar.gz
-rw-rw-r-- 1 tidb tidb  26438285 Mar  3 14:03 prometheus-2.2.1.tar.gz
-rw-rw-r-- 1 tidb tidb   9631079 Mar  3 14:03 alertmanager-0.14.0.tar.gz
-rw-rw-r-- 1 tidb tidb   5278094 Mar  3 14:03 node_exporter-0.15.2.tar.gz
-rw-rw-r-- 1 tidb tidb   3723718 Mar  3 14:03 pushgateway-0.4.0.tar.gz
-rw-rw-r-- 1 tidb tidb  47187988 Mar  3 14:03 grafana-4.6.3.tar.gz
-rw-rw-r-- 1 tidb tidb   4950085 Mar  3 14:03 blackbox_exporter-0.12.0.tar.gz
-rw-rw-r-- 1 tidb tidb 225875602 Mar  3 14:05 spark-2.3.2-bin-hadoop2.7.tgz
-rw-rw-r-- 1 tidb tidb  25053482 Mar  3 14:05 tispark-latest.tar.gz
-rw-rw-r-- 1 tidb tidb   2937369 Mar  3 14:05 tispark-sample-data.tar.gz
[root@tidb-center ~]# 
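
Rather than re-running `du` by hand, you can poll the directory size. A minimal sketch (the path is the one used in this deployment; adjust to your own layout):

```shell
# Report the human-readable size of a directory (defaults to the
# current directory). Wrap in `watch -n 10 ...` or a loop for a
# live view while local_prepare.yml downloads packages.
dir_size() {
    du -sh "${1:-.}" | cut -f1
}

dir_size /tmp    # size of /tmp; on the control machine, point this
                 # at /home/tidb/tidb-ansible/downloads/
```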

Finally, the playbook completes:

[tidb@tidb-center tidb-ansible]$ ansible-playbook local_prepare.yml

PLAY [do local preparation] *****************************************************************************************************************************************

TASK [local : Stop if ansible version is too low, make sure that the Ansible version is Ansible 2.4.2 or later, otherwise a compatibility issue occurs.] ************
ok: [localhost] => {
    "changed": false, 
    "msg": "All assertions passed"
}

TASK [local : create downloads and resources directories] ***********************************************************************************************************
changed: [localhost] => (item=/home/tidb/tidb-ansible/downloads)
changed: [localhost] => (item=/home/tidb/tidb-ansible/resources)
changed: [localhost] => (item=/home/tidb/tidb-ansible/resources/bin)

TASK [local : create cert directory] ********************************************************************************************************************************

TASK [local : create packages.yml] **********************************************************************************************************************************
changed: [localhost]

TASK [local : create specific deployment method packages.yml] *******************************************************************************************************
changed: [localhost]

TASK [local : include_vars] *****************************************************************************************************************************************
ok: [localhost]

TASK [local : include_vars] *****************************************************************************************************************************************
ok: [localhost]

TASK [local : detect outbound network] ******************************************************************************************************************************
ok: [localhost]

TASK [local : set outbound network fact] ****************************************************************************************************************************
ok: [localhost]

TASK [local : fail] *************************************************************************************************************************************************

TASK [local : detect GFW] *******************************************************************************************************************************************
ok: [localhost]

TASK [local : set GFW fact] *****************************************************************************************************************************************
ok: [localhost]

TASK [local : download tidb binary] *********************************************************************************************************************************
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/tidb-v2.1.5-linux-amd64.tar.gz', u'version': u'v2.1.5', u'name': u'tidb'})

TASK [local : download common binary] *******************************************************************************************************************************
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/fio-3.8.tar.gz', u'checksum': u'sha256:15739abde7e74b59ac59df57f129b14fc5cd59e1e2eca2ce37b41f8c289c3d58', u'version': 3.8, u'name': u'fio'})
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/grafana_collector-latest-linux-amd64.tar.gz', u'version': u'latest', u'name': u'grafana_collector'})
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/kafka_exporter-1.1.0.linux-amd64.tar.gz', u'version': u'1.1.0', u'name': u'kafka_exporter'})

TASK [local : download diagnosis tools] *****************************************************************************************************************************
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/tidb-insight-v0.2.5-1-g99b8fea.tar.gz', u'version': u'v0.2.5-1-g99b8fea', u'name': u'tidb-insight'})

TASK [local : download cfssl binary] ********************************************************************************************************************************

TASK [local : download cfssljson binary] ****************************************************************************************************************************

TASK [local : include_tasks] ****************************************************************************************************************************************
included: /home/tidb/tidb-ansible/roles/local/tasks/binary_deployment.yml for localhost

TASK [local : download other binary] ********************************************************************************************************************************

TASK [local : download other binary under gfw] **********************************************************************************************************************
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/prometheus-2.2.1.linux-amd64.tar.gz', u'version': u'2.2.1', u'name': u'prometheus'})
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/alertmanager-0.14.0.linux-amd64.tar.gz', u'version': u'0.14.0', u'name': u'alertmanager'})
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/node_exporter-0.15.2.linux-amd64.tar.gz', u'version': u'0.15.2', u'name': u'node_exporter'})
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/pushgateway-0.4.0.linux-amd64.tar.gz', u'version': u'0.4.0', u'name': u'pushgateway'})
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/grafana-4.6.3.linux-x64.tar.gz', u'version': u'4.6.3', u'name': u'grafana'})
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/blackbox_exporter-0.12.0.linux-amd64.tar.gz', u'version': u'0.12.0', u'name': u'blackbox_exporter'})

TASK [local : download TiSpark packages] ****************************************************************************************************************************
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/spark-2.3.2-bin-hadoop2.7.tgz', u'checksum': u'sha256:6246b20d95c7596a29fb26d5b50a3ae3163a35915bec6c515a8e183383bedc43', u'version': u'2.3.2', u'name': u'spark-2.3.2-bin-hadoop2.7.tgz'})
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/tispark-latest-linux-amd64.tar.gz', u'version': u'latest', u'name': u'tispark-latest.tar.gz'})
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/tispark-sample-data.tar.gz', u'version': u'latest', u'name': u'tispark-sample-data.tar.gz'})

TASK [local : unarchive third party binary] *************************************************************************************************************************
changed: [localhost] => (item={u'url': u'https://github.com/prometheus/prometheus/releases/download/v2.2.1/prometheus-2.2.1.linux-amd64.tar.gz', u'version': u'2.2.1', u'name': u'prometheus'})
changed: [localhost] => (item={u'url': u'https://github.com/prometheus/alertmanager/releases/download/v0.14.0/alertmanager-0.14.0.linux-amd64.tar.gz', u'version': u'0.14.0', u'name': u'alertmanager'})
changed: [localhost] => (item={u'url': u'https://github.com/prometheus/node_exporter/releases/download/v0.15.2/node_exporter-0.15.2.linux-amd64.tar.gz', u'version': u'0.15.2', u'name': u'node_exporter'})
changed: [localhost] => (item={u'url': u'https://github.com/prometheus/blackbox_exporter/releases/download/v0.12.0/blackbox_exporter-0.12.0.linux-amd64.tar.gz', u'version': u'0.12.0', u'name': u'blackbox_exporter'})
changed: [localhost] => (item={u'url': u'https://github.com/prometheus/pushgateway/releases/download/v0.4.0/pushgateway-0.4.0.linux-amd64.tar.gz', u'version': u'0.4.0', u'name': u'pushgateway'})
changed: [localhost] => (item={u'url': u'https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.6.3.linux-x64.tar.gz', u'version': u'4.6.3', u'name': u'grafana'})

TASK [local : unarchive tispark] ************************************************************************************************************************************
changed: [localhost]

TASK [local : unarchive tispark-sample-data] ************************************************************************************************************************
changed: [localhost]

TASK [local : cp monitoring binary] *********************************************************************************************************************************
changed: [localhost] => (item=alertmanager)
changed: [localhost] => (item=prometheus)
changed: [localhost] => (item=node_exporter)
changed: [localhost] => (item=pushgateway)
changed: [localhost] => (item=blackbox_exporter)

TASK [local : cp tispark] *******************************************************************************************************************************************
changed: [localhost]

TASK [local : cp tispark-sample-data] *******************************************************************************************************************************
changed: [localhost]

TASK [local : unarchive tidb binary] ********************************************************************************************************************************
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/tidb-v2.1.5-linux-amd64.tar.gz', u'version': u'v2.1.5', u'name': u'tidb'})

TASK [local : unarchive common binary] ******************************************************************************************************************************
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/fio-3.8.tar.gz', u'checksum': u'sha256:15739abde7e74b59ac59df57f129b14fc5cd59e1e2eca2ce37b41f8c289c3d58', u'version': 3.8, u'name': u'fio'})
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/grafana_collector-latest-linux-amd64.tar.gz', u'version': u'latest', u'name': u'grafana_collector'})
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/kafka_exporter-1.1.0.linux-amd64.tar.gz', u'version': u'1.1.0', u'name': u'kafka_exporter'})

TASK [local : cp tidb binary] ***************************************************************************************************************************************
changed: [localhost] => (item={u'url': u'http://download.pingcap.org/tidb-v2.1.5-linux-amd64.tar.gz', u'version': u'v2.1.5', u'name': u'tidb'})

TASK [local : cp fio binary] ****************************************************************************************************************************************
changed: [localhost] => (item=fio)

TASK [local : cp grafana_collector binary and fonts] ****************************************************************************************************************
changed: [localhost]

TASK [local : cp kafka_exporter binary] *****************************************************************************************************************************
changed: [localhost] => (item=kafka_exporter)

TASK [local : cp daemontools binary] ********************************************************************************************************************************

TASK [local : cp tidb-insight tarball] ******************************************************************************************************************************
changed: [localhost]

TASK [local : clean up download dir] ********************************************************************************************************************************
changed: [localhost]

PLAY RECAP **********************************************************************************************************************************************************
localhost                  : ok=30   changed=22   unreachable=0    failed=0   

Congrats! All goes well. :-)
[tidb@tidb-center tidb-ansible]$ 
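
Before moving on, it can be worth confirming that the unpacked binaries actually landed under `resources/bin`. A small sketch (the helper name and the exact binary list are assumptions for a typical TiDB v2.1 package; compare against your own directory listing):

```shell
# Hypothetical sanity check after local_prepare.yml: report any
# expected binary that is missing or not executable in a directory.
check_bins() {
    dir=$1; shift
    missing=0
    for b in "$@"; do
        if [ ! -x "$dir/$b" ]; then
            echo "missing: $b"
            missing=$((missing + 1))
        fi
    done
    echo "$missing binaries missing"
}

# On the control machine, something like:
# check_bins /home/tidb/tidb-ansible/resources/bin \
#     tidb-server tikv-server pd-server
```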

Initialize the system environment:

[tidb@tidb-center tidb-ansible]$ ansible-playbook bootstrap.yml 

(Output truncated for brevity.)

TASK [pre-ansible : set has_python facts] ***************************************************************************************************************************

TASK [pre-ansible : include_tasks] **********************************************************************************************************************************

TASK [pre-ansible : include_tasks] **********************************************************************************************************************************
included: /home/tidb/tidb-ansible/roles/pre-ansible/tasks/root_tasks.yml for 10.158.1.98, 10.158.1.122, 10.158.1.123, 10.158.1.125, 10.158.1.126, 10.158.1.127, TiKV-1-1, TiKV-2-1, TiKV-1-2, TiKV-2-2, TiKV-1-3, TiKV-2-3

TASK [pre-ansible : Debian/Ubuntu - install python] *****************************************************************************************************************

TASK [pre-ansible : Redhat/CentOS - install python] *****************************************************************************************************************

TASK [pre-ansible : Redhat/CentOS - Make sure ntp, ntpstat have been installed] *************************************************************************************

TASK [pre-ansible : Debian/Ubuntu - Make sure ntp, ntpstat have been installed] *************************************************************************************

TASK [bootstrap : gather facts] *************************************************************************************************************************************
ok: [10.158.1.122]
ok: [10.158.1.126]
ok: [10.158.1.123]
ok: [10.158.1.98]
ok: [10.158.1.125]
ok: [10.158.1.127]
ok: [TiKV-1-2]
ok: [TiKV-1-1]
ok: [TiKV-2-2]
ok: [TiKV-2-1]
ok: [TiKV-1-3]
ok: [TiKV-2-3]

TASK [bootstrap : group hosts by distribution] **********************************************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]
ok: [10.158.1.123]
ok: [10.158.1.125]
ok: [10.158.1.126]
ok: [10.158.1.127]
ok: [TiKV-1-1]
ok: [TiKV-2-1]
ok: [TiKV-1-2]
ok: [TiKV-2-2]
ok: [TiKV-1-3]
ok: [TiKV-2-3]

TASK [bootstrap : Set deploy_dir if not presented] ******************************************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]
ok: [10.158.1.123]
ok: [10.158.1.125]
ok: [10.158.1.126]
ok: [10.158.1.127]

TASK [bootstrap : include_tasks] ************************************************************************************************************************************
included: /home/tidb/tidb-ansible/roles/bootstrap/tasks/root_tasks.yml for 10.158.1.98, 10.158.1.122, 10.158.1.123, 10.158.1.125, 10.158.1.126, 10.158.1.127, TiKV-1-1, TiKV-2-1, TiKV-1-2, TiKV-2-2, TiKV-1-3, TiKV-2-3

TASK [bootstrap : setting absent kernel params] *********************************************************************************************************************
changed: [10.158.1.98] => (item={u'name': u'net.ipv4.tcp_tw_recycle', u'value': 0})
changed: [10.158.1.122] => (item={u'name': u'net.ipv4.tcp_tw_recycle', u'value': 0})
changed: [10.158.1.123] => (item={u'name': u'net.ipv4.tcp_tw_recycle', u'value': 0})
changed: [10.158.1.126] => (item={u'name': u'net.ipv4.tcp_tw_recycle', u'value': 0})
changed: [10.158.1.125] => (item={u'name': u'net.ipv4.tcp_tw_recycle', u'value': 0})
changed: [10.158.1.127] => (item={u'name': u'net.ipv4.tcp_tw_recycle', u'value': 0})
changed: [TiKV-1-2] => (item={u'name': u'net.ipv4.tcp_tw_recycle', u'value': 0})
changed: [TiKV-2-2] => (item={u'name': u'net.ipv4.tcp_tw_recycle', u'value': 0})
changed: [TiKV-1-3] => (item={u'name': u'net.ipv4.tcp_tw_recycle', u'value': 0})
changed: [TiKV-1-1] => (item={u'name': u'net.ipv4.tcp_tw_recycle', u'value': 0})
changed: [TiKV-2-3] => (item={u'name': u'net.ipv4.tcp_tw_recycle', u'value': 0})
changed: [TiKV-2-1] => (item={u'name': u'net.ipv4.tcp_tw_recycle', u'value': 0})

TASK [bootstrap : setting present kernel params] ********************************************************************************************************************
ok: [10.158.1.98] => (item={u'name': u'net.core.somaxconn', u'value': 32768})
ok: [10.158.1.122] => (item={u'name': u'net.core.somaxconn', u'value': 32768})
ok: [10.158.1.123] => (item={u'name': u'net.core.somaxconn', u'value': 32768})
ok: [10.158.1.125] => (item={u'name': u'net.core.somaxconn', u'value': 32768})
ok: [10.158.1.126] => (item={u'name': u'net.core.somaxconn', u'value': 32768})
ok: [10.158.1.98] => (item={u'name': u'vm.swappiness', u'value': 0})
ok: [10.158.1.122] => (item={u'name': u'vm.swappiness', u'value': 0})
ok: [10.158.1.123] => (item={u'name': u'vm.swappiness', u'value': 0})
ok: [10.158.1.126] => (item={u'name': u'vm.swappiness', u'value': 0})
ok: [10.158.1.125] => (item={u'name': u'vm.swappiness', u'value': 0})
ok: [10.158.1.98] => (item={u'name': u'net.ipv4.tcp_syncookies', u'value': 0})
ok: [10.158.1.122] => (item={u'name': u'net.ipv4.tcp_syncookies', u'value': 0})
ok: [10.158.1.123] => (item={u'name': u'net.ipv4.tcp_syncookies', u'value': 0})
ok: [10.158.1.126] => (item={u'name': u'net.ipv4.tcp_syncookies', u'value': 0})
ok: [10.158.1.98] => (item={u'name': u'fs.file-max', u'value': 1000000})
ok: [10.158.1.125] => (item={u'name': u'net.ipv4.tcp_syncookies', u'value': 0})
ok: [10.158.1.122] => (item={u'name': u'fs.file-max', u'value': 1000000})
ok: [10.158.1.123] => (item={u'name': u'fs.file-max', u'value': 1000000})
ok: [10.158.1.126] => (item={u'name': u'fs.file-max', u'value': 1000000})
ok: [10.158.1.127] => (item={u'name': u'net.core.somaxconn', u'value': 32768})
ok: [10.158.1.125] => (item={u'name': u'fs.file-max', u'value': 1000000})
ok: [TiKV-1-1] => (item={u'name': u'net.core.somaxconn', u'value': 32768})
ok: [TiKV-1-2] => (item={u'name': u'net.core.somaxconn', u'value': 32768})
ok: [TiKV-2-1] => (item={u'name': u'net.core.somaxconn', u'value': 32768})
ok: [10.158.1.127] => (item={u'name': u'vm.swappiness', u'value': 0})
ok: [TiKV-2-2] => (item={u'name': u'net.core.somaxconn', u'value': 32768})
ok: [TiKV-1-1] => (item={u'name': u'vm.swappiness', u'value': 0})
ok: [TiKV-1-2] => (item={u'name': u'vm.swappiness', u'value': 0})
ok: [10.158.1.127] => (item={u'name': u'net.ipv4.tcp_syncookies', u'value': 0})
ok: [TiKV-2-1] => (item={u'name': u'vm.swappiness', u'value': 0})
ok: [TiKV-2-2] => (item={u'name': u'vm.swappiness', u'value': 0})
ok: [TiKV-1-1] => (item={u'name': u'net.ipv4.tcp_syncookies', u'value': 0})
ok: [TiKV-1-2] => (item={u'name': u'net.ipv4.tcp_syncookies', u'value': 0})
ok: [10.158.1.127] => (item={u'name': u'fs.file-max', u'value': 1000000})
ok: [TiKV-2-1] => (item={u'name': u'net.ipv4.tcp_syncookies', u'value': 0})
ok: [TiKV-2-2] => (item={u'name': u'net.ipv4.tcp_syncookies', u'value': 0})
ok: [TiKV-1-1] => (item={u'name': u'fs.file-max', u'value': 1000000})
ok: [TiKV-1-2] => (item={u'name': u'fs.file-max', u'value': 1000000})
ok: [TiKV-1-3] => (item={u'name': u'net.core.somaxconn', u'value': 32768})
ok: [TiKV-2-2] => (item={u'name': u'fs.file-max', u'value': 1000000})
ok: [TiKV-2-1] => (item={u'name': u'fs.file-max', u'value': 1000000})
ok: [TiKV-2-3] => (item={u'name': u'net.core.somaxconn', u'value': 32768})
ok: [TiKV-1-3] => (item={u'name': u'vm.swappiness', u'value': 0})
ok: [TiKV-2-3] => (item={u'name': u'vm.swappiness', u'value': 0})
ok: [TiKV-1-3] => (item={u'name': u'net.ipv4.tcp_syncookies', u'value': 0})
ok: [TiKV-2-3] => (item={u'name': u'net.ipv4.tcp_syncookies', u'value': 0})
ok: [TiKV-1-3] => (item={u'name': u'fs.file-max', u'value': 1000000})
ok: [TiKV-2-3] => (item={u'name': u'fs.file-max', u'value': 1000000})
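
The kernel parameters applied above can be spot-checked on any node by reading them back from `/proc/sys`. A sketch, not part of the official tooling; the names and values are copied from the "setting present kernel params" task output:

```shell
# Verify the sysctl values that bootstrap.yml applies. Parameters
# absent on a given kernel (e.g. tcp_tw_recycle on 4.12+) are
# reported rather than treated as failures.
count=0
mismatch=0
for kv in net.core.somaxconn=32768 vm.swappiness=0 \
          net.ipv4.tcp_syncookies=0 fs.file-max=1000000; do
    key=${kv%%=*}
    want=${kv#*=}
    path=/proc/sys/$(printf '%s' "$key" | tr . /)
    count=$((count + 1))
    if [ -r "$path" ]; then
        got=$(cat "$path")
        if [ "$got" != "$want" ]; then
            echo "$key: got $got, want $want"
            mismatch=$((mismatch + 1))
        fi
    else
        echo "$key: not present on this kernel"
    fi
done
echo "$count params checked, $mismatch mismatched"
```

On a node the playbook has already bootstrapped, the final line should report zero mismatches.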

TASK [bootstrap : update /etc/security/limits.conf] *****************************************************************************************************************
ok: [10.158.1.122]
ok: [10.158.1.98]
ok: [10.158.1.125]
ok: [10.158.1.123]
ok: [10.158.1.126]
ok: [10.158.1.127]
ok: [TiKV-1-1]
ok: [TiKV-2-1]
ok: [TiKV-1-2]
ok: [TiKV-2-2]
ok: [TiKV-1-3]
ok: [TiKV-2-3]

TASK [bootstrap : disable swap] *************************************************************************************************************************************
changed: [10.158.1.123]

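Only tidb-3 reported a change for `disable swap` here, presumably because the other nodes already had swap off. To confirm a node's state by hand, a quick check against `/proc/meminfo`:

```shell
# Report whether swap is currently enabled, using /proc/meminfo
# (always present on Linux). To turn swap off manually: `swapoff -a`,
# then comment out any swap entries in /etc/fstab so it stays off
# across reboots.
swap_state=$(awk '/^SwapTotal:/ {print ($2 == 0) ? "disabled" : "enabled (" $2 " kB)"}' /proc/meminfo)
echo "swap: $swap_state"
```
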
TASK [bootstrap : create group] *************************************************************************************************************************************
ok: [10.158.1.122]
ok: [10.158.1.98]
ok: [10.158.1.123]
ok: [10.158.1.125]
ok: [10.158.1.126]
ok: [10.158.1.127]
ok: [TiKV-1-1]
ok: [TiKV-2-1]
ok: [TiKV-1-2]
ok: [TiKV-2-2]
ok: [TiKV-1-3]
ok: [TiKV-2-3]

TASK [bootstrap : create account] ***********************************************************************************************************************************
ok: [10.158.1.122]
ok: [10.158.1.98]
ok: [10.158.1.126]
ok: [10.158.1.123]
ok: [10.158.1.125]
ok: [10.158.1.127]
ok: [TiKV-1-1]
ok: [TiKV-2-2]
ok: [TiKV-1-2]
ok: [TiKV-2-1]
ok: [TiKV-1-3]
ok: [TiKV-2-3]

TASK [bootstrap : create top deploy dir when under root] ************************************************************************************************************
ok: [10.158.1.122]
ok: [10.158.1.125]
ok: [10.158.1.126]
ok: [10.158.1.98]
ok: [10.158.1.123]
ok: [10.158.1.127]
ok: [TiKV-1-1]
ok: [TiKV-2-1]
ok: [TiKV-1-2]
ok: [TiKV-2-2]
ok: [TiKV-1-3]
ok: [TiKV-2-3]

TASK [bootstrap : create wal_dir deploy dir when under root] ********************************************************************************************************

TASK [bootstrap : create raftdb_path deploy dir when under root] ****************************************************************************************************

TASK [bootstrap : set hostname if hostname is not distinguishable] **************************************************************************************************

TASK [bootstrap : set hostname in hosts file] ***********************************************************************************************************************

TASK [bootstrap : determine if firewalld is running] ****************************************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]
ok: [10.158.1.126]
ok: [10.158.1.125]
ok: [10.158.1.123]
ok: [10.158.1.127]
ok: [TiKV-1-1]
ok: [TiKV-1-2]
ok: [TiKV-2-1]
ok: [TiKV-2-2]
ok: [TiKV-1-3]
ok: [TiKV-2-3]

TASK [bootstrap : disable firewalld] ********************************************************************************************************************************

TASK [bootstrap : or to enable firewalld] ***************************************************************************************************************************

TASK [bootstrap : check centos configuration file exists] ***********************************************************************************************************
ok: [10.158.1.122]
ok: [10.158.1.123]
ok: [10.158.1.98]
ok: [10.158.1.126]
ok: [10.158.1.125]
ok: [10.158.1.127]
ok: [TiKV-1-2]
ok: [TiKV-2-1]
ok: [TiKV-1-1]
ok: [TiKV-2-2]
ok: [TiKV-1-3]
ok: [TiKV-2-3]

TASK [bootstrap : check debian configuration file exists] ***********************************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]
ok: [10.158.1.123]
ok: [10.158.1.125]
ok: [10.158.1.126]
ok: [10.158.1.127]
ok: [TiKV-1-1]
ok: [TiKV-2-1]
ok: [TiKV-1-2]
ok: [TiKV-2-2]
ok: [TiKV-1-3]
ok: [TiKV-2-3]

TASK [bootstrap : modify centos irqbalance configuration file] ******************************************************************************************************
ok: [10.158.1.122]
ok: [10.158.1.123]
ok: [10.158.1.126]
ok: [10.158.1.98]
ok: [10.158.1.125]
ok: [10.158.1.127]
ok: [TiKV-1-1]
ok: [TiKV-2-1]
ok: [TiKV-1-2]
ok: [TiKV-2-2]
ok: [TiKV-1-3]
ok: [TiKV-2-3]

TASK [bootstrap : modify debian irqbalance configuration file] ******************************************************************************************************

TASK [bootstrap : start irqbalance service] *************************************************************************************************************************
ok: [10.158.1.123]
ok: [10.158.1.122]
ok: [10.158.1.126]
ok: [10.158.1.98]
ok: [10.158.1.125]
ok: [10.158.1.127]
ok: [TiKV-1-1]
ok: [TiKV-1-2]
ok: [TiKV-2-2]
ok: [TiKV-2-1]
ok: [TiKV-1-3]
ok: [TiKV-2-3]

PLAY [check system] *************************************************************************************************************************************************

TASK [check_system_static : Disk space check - Fail task when disk is full] *****************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]
ok: [10.158.1.123]
ok: [10.158.1.125]
ok: [10.158.1.126]
ok: [10.158.1.127]
ok: [TiKV-1-1]
ok: [TiKV-2-1]
ok: [TiKV-1-2]
ok: [TiKV-2-2]
ok: [TiKV-1-3]
ok: [TiKV-2-3]

TASK [check_system_static : get facts] ******************************************************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]
ok: [10.158.1.123]
ok: [10.158.1.126]
ok: [10.158.1.125]
ok: [10.158.1.127]
ok: [TiKV-1-2]
ok: [TiKV-1-1]
ok: [TiKV-2-2]
ok: [TiKV-2-1]
ok: [TiKV-1-3]
ok: [TiKV-2-3]

TASK [check_system_static : Preflight check - Linux OS family and distribution version] *****************************************************************************

TASK [check_system_static : Deploy check_cpufreq script] ************************************************************************************************************
changed: [10.158.1.122]
changed: [10.158.1.98]
changed: [10.158.1.123]
changed: [10.158.1.125]
changed: [10.158.1.126]
changed: [10.158.1.127]
changed: [TiKV-1-1]
changed: [TiKV-2-1]
changed: [TiKV-1-2]
changed: [TiKV-2-2]
changed: [TiKV-1-3]
changed: [TiKV-2-3]

TASK [check_system_static : Preflight check - Check CPUfreq governors available in the kernel] **********************************************************************
changed: [10.158.1.98]
changed: [10.158.1.122]
changed: [10.158.1.123]
changed: [10.158.1.126]
changed: [10.158.1.125]
changed: [10.158.1.127]
changed: [TiKV-1-1]
changed: [TiKV-2-1]
changed: [TiKV-1-2]
changed: [TiKV-2-2]
changed: [TiKV-1-3]
changed: [TiKV-2-3]

TASK [check_system_static : Preflight check - Check the currently active governor] **********************************************************************************
changed: [10.158.1.98]
changed: [10.158.1.122]
changed: [10.158.1.123]
changed: [10.158.1.126]
changed: [10.158.1.125]
changed: [10.158.1.127]
changed: [TiKV-1-1]
changed: [TiKV-2-1]
changed: [TiKV-1-2]
changed: [TiKV-2-2]
changed: [TiKV-1-3]
changed: [TiKV-2-3]

TASK [check_system_static : Preflight check - Fail when CPU frequency governor is not set to performance mode] ******************************************************

TASK [check_system_static : Clean check_cpufreq script] *************************************************************************************************************
changed: [10.158.1.98]
changed: [10.158.1.122]
changed: [10.158.1.123]
changed: [10.158.1.125]
changed: [10.158.1.126]
changed: [10.158.1.127]
changed: [TiKV-1-1]
changed: [TiKV-2-1]
changed: [TiKV-1-2]
changed: [TiKV-2-2]
changed: [TiKV-1-3]
changed: [TiKV-2-3]
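
The cpufreq preflight above is informational; you can inspect the governor yourself on any node. Virtual machines often expose no cpufreq interface at all, in which case nothing is listed and the check simply skips:

```shell
# List the active CPU frequency governor for every cpufreq policy.
# An unmatched glob is left unexpanded by the shell, so the -r test
# quietly skips it when the kernel exposes no cpufreq interface.
list_governors() {
    found=0
    for f in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        [ -r "$f" ] || continue
        found=$((found + 1))
        echo "$f: $(cat "$f")"
    done
    echo "$found cpufreq policies found"
}

list_governors
# To switch to the recommended mode: cpupower frequency-set -g performance
```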

TASK [check_system_static : Preflight check - Check Linux kernel overcommit_memory parameter] ***********************************************************************
changed: [10.158.1.98]
changed: [10.158.1.122]
changed: [10.158.1.123]
changed: [10.158.1.126]
changed: [10.158.1.125]
changed: [10.158.1.127]
changed: [TiKV-1-1]
changed: [TiKV-1-2]
changed: [TiKV-2-2]
changed: [TiKV-2-1]
changed: [TiKV-1-3]
changed: [TiKV-2-3]

TASK [check_system_static : Preflight check - Fail when Linux kernel vm.overcommit_memory parameter is set to 2] ****************************************************

TASK [check_system_optional : Preflight check - Check TiDB server's CPU] ********************************************************************************************

TASK [check_system_optional : Preflight check - Check TiKV server's CPU] ********************************************************************************************

TASK [check_system_optional : Preflight check - Check PD server's CPU] **********************************************************************************************

TASK [check_system_optional : Preflight check - Check Monitor server's CPU] *****************************************************************************************

TASK [check_system_optional : Preflight check - Check TiDB server's RAM] ********************************************************************************************

TASK [check_system_optional : Preflight check - Check TiKV server's RAM] ********************************************************************************************

TASK [check_system_optional : Preflight check - Check PD server's RAM] **********************************************************************************************

TASK [check_system_optional : Preflight check - Check Monitor server's RAM] *****************************************************************************************

TASK [check_system_optional : Set deploy_dir if not presented] ******************************************************************************************************

TASK [check_system_optional : Determine which mountpoint deploy dir exists on] **************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]
ok: [10.158.1.123]
ok: [10.158.1.126]
ok: [10.158.1.125]
ok: [10.158.1.127]
ok: [TiKV-1-1]
ok: [TiKV-1-2]
ok: [TiKV-2-1]
ok: [TiKV-2-2]
ok: [TiKV-1-3]
ok: [TiKV-2-3]

TASK [check_system_optional : set_fact] *****************************************************************************************************************************
ok: [10.158.1.98] => (item={u'block_used': 1043183, u'uuid': u'c4ef80aa-c386-4cc7-bdfe-953aca40a7a6', u'size_total': 61169209344, u'block_total': 14933889, u'mount': u'/', u'block_available': 13890706, u'size_available': 56896331776, u'fstype': u'xfs', u'inode_total': 59764736, u'inode_available': 59632155, u'device': u'/dev/mapper/ol_tidb--1-root', u'inode_used': 132581, u'block_size': 4096, u'options': u'rw,relatime,attr2,inode64,noquota'})
ok: [10.158.1.122] => (item={u'block_used': 1042957, u'uuid': u'28c2e5dc-879f-40d1-8b48-511b1c271cc8', u'size_total': 61169209344, u'block_total': 14933889, u'mount': u'/', u'block_available': 13890932, u'size_available': 56897257472, u'fstype': u'xfs', u'inode_total': 59764736, u'inode_available': 59632176, u'device': u'/dev/mapper/ol-root', u'inode_used': 132560, u'block_size': 4096, u'options': u'rw,relatime,attr2,inode64,noquota'})

TASK [check_system_optional : Preflight check - Check TiDB server's disk space] *************************************************************************************
ok: [10.158.1.98] => {
    "msg": "The file system mounted at / does not meet minimum disk space requirement: at least 500.0 GB."
}
ok: [10.158.1.122] => {
    "msg": "The file system mounted at / does not meet minimum disk space requirement: at least 500.0 GB."
}

TASK [check_system_optional : set_fact] *****************************************************************************************************************************
ok: [TiKV-1-1] => (item={u'block_used': 1330186, u'uuid': u'5e39bd86-d953-460c-9f4e-eeb2b9893e93', u'size_total': 105554829312, u'block_total': 25770222, u'mount': u'/tidb_data_1', u'block_available': 24440036, u'size_available': 100106387456, u'fstype': u'ext4', u'inode_total': 6553600, u'inode_available': 6553585, u'device': u'/dev/sdb', u'inode_used': 15, u'block_size': 4096, u'options': u'rw,noatime,nodelalloc,data=ordered'})
ok: [TiKV-2-1] => (item={u'block_used': 1330186, u'uuid': u'5e39bd86-d953-460c-9f4e-eeb2b9893e93', u'size_total': 105554829312, u'block_total': 25770222, u'mount': u'/tidb_data_1', u'block_available': 24440036, u'size_available': 100106387456, u'fstype': u'ext4', u'inode_total': 6553600, u'inode_available': 6553585, u'device': u'/dev/sdb', u'inode_used': 15, u'block_size': 4096, u'options': u'rw,noatime,nodelalloc,data=ordered'})
ok: [TiKV-1-2] => (item={u'block_used': 1330186, u'uuid': u'3a4de99c-451a-4752-a5b3-50ffdc99840e', u'size_total': 105554829312, u'block_total': 25770222, u'mount': u'/tidb_data_1', u'block_available': 24440036, u'size_available': 100106387456, u'fstype': u'ext4', u'inode_total': 6553600, u'inode_available': 6553585, u'device': u'/dev/sdb', u'inode_used': 15, u'block_size': 4096, u'options': u'rw,noatime,nodelalloc,data=ordered'})
ok: [TiKV-2-2] => (item={u'block_used': 1330186, u'uuid': u'3a4de99c-451a-4752-a5b3-50ffdc99840e', u'size_total': 105554829312, u'block_total': 25770222, u'mount': u'/tidb_data_1', u'block_available': 24440036, u'size_available': 100106387456, u'fstype': u'ext4', u'inode_total': 6553600, u'inode_available': 6553585, u'device': u'/dev/sdb', u'inode_used': 15, u'block_size': 4096, u'options': u'rw,noatime,nodelalloc,data=ordered'})
ok: [TiKV-1-3] => (item={u'block_used': 1330186, u'uuid': u'7d7c0bac-1487-4abe-bcaf-6b3a3a15f78d', u'size_total': 105554829312, u'block_total': 25770222, u'mount': u'/tidb_data_1', u'block_available': 24440036, u'size_available': 100106387456, u'fstype': u'ext4', u'inode_total': 6553600, u'inode_available': 6553585, u'device': u'/dev/sdb', u'inode_used': 15, u'block_size': 4096, u'options': u'rw,noatime,nodelalloc,data=ordered'})
ok: [TiKV-2-3] => (item={u'block_used': 1330186, u'uuid': u'7d7c0bac-1487-4abe-bcaf-6b3a3a15f78d', u'size_total': 105554829312, u'block_total': 25770222, u'mount': u'/tidb_data_1', u'block_available': 24440036, u'size_available': 100106387456, u'fstype': u'ext4', u'inode_total': 6553600, u'inode_available': 6553585, u'device': u'/dev/sdb', u'inode_used': 15, u'block_size': 4096, u'options': u'rw,noatime,nodelalloc,data=ordered'})

TASK [check_system_optional : Preflight check - Check TiKV server's disk space] *************************************************************************************
ok: [TiKV-1-1] => {
    "msg": "The file system mounted at /tidb_data_1 does not meet minimum disk space requirement: at least 500.0 GB."
}
ok: [TiKV-2-1] => {
    "msg": "The file system mounted at /tidb_data_1 does not meet minimum disk space requirement: at least 500.0 GB."
}
ok: [TiKV-1-2] => {
    "msg": "The file system mounted at /tidb_data_1 does not meet minimum disk space requirement: at least 500.0 GB."
}
ok: [TiKV-2-2] => {
    "msg": "The file system mounted at /tidb_data_1 does not meet minimum disk space requirement: at least 500.0 GB."
}
ok: [TiKV-1-3] => {
    "msg": "The file system mounted at /tidb_data_1 does not meet minimum disk space requirement: at least 500.0 GB."
}
ok: [TiKV-2-3] => {
    "msg": "The file system mounted at /tidb_data_1 does not meet minimum disk space requirement: at least 500.0 GB."
}

TASK [check_system_optional : set_fact] *****************************************************************************************************************************
ok: [10.158.1.98] => (item={u'block_used': 1043183, u'uuid': u'c4ef80aa-c386-4cc7-bdfe-953aca40a7a6', u'size_total': 61169209344, u'block_total': 14933889, u'mount': u'/', u'block_available': 13890706, u'size_available': 56896331776, u'fstype': u'xfs', u'inode_total': 59764736, u'inode_available': 59632155, u'device': u'/dev/mapper/ol_tidb--1-root', u'inode_used': 132581, u'block_size': 4096, u'options': u'rw,relatime,attr2,inode64,noquota'})
ok: [10.158.1.122] => (item={u'block_used': 1042957, u'uuid': u'28c2e5dc-879f-40d1-8b48-511b1c271cc8', u'size_total': 61169209344, u'block_total': 14933889, u'mount': u'/', u'block_available': 13890932, u'size_available': 56897257472, u'fstype': u'xfs', u'inode_total': 59764736, u'inode_available': 59632176, u'device': u'/dev/mapper/ol-root', u'inode_used': 132560, u'block_size': 4096, u'options': u'rw,relatime,attr2,inode64,noquota'})
ok: [10.158.1.123] => (item={u'block_used': 1042962, u'uuid': u'87513e8e-5ceb-4364-a2f0-763626c0c074', u'size_total': 61169209344, u'block_total': 14933889, u'mount': u'/', u'block_available': 13890927, u'size_available': 56897236992, u'fstype': u'xfs', u'inode_total': 59764736, u'inode_available': 59632157, u'device': u'/dev/mapper/ol-root', u'inode_used': 132579, u'block_size': 4096, u'options': u'rw,relatime,attr2,inode64,noquota'})

TASK [check_system_optional : Preflight check - Check PD server's disk space] ***************************************************************************************
ok: [10.158.1.98] => {
    "msg": "The file system mounted at / does not meet minimum disk space requirement: at least 200.0 GB."
}
ok: [10.158.1.122] => {
    "msg": "The file system mounted at / does not meet minimum disk space requirement: at least 200.0 GB."
}
ok: [10.158.1.123] => {
    "msg": "The file system mounted at / does not meet minimum disk space requirement: at least 200.0 GB."
}

TASK [check_system_optional : set_fact] *****************************************************************************************************************************
ok: [10.158.1.98] => (item={u'block_used': 1043183, u'uuid': u'c4ef80aa-c386-4cc7-bdfe-953aca40a7a6', u'size_total': 61169209344, u'block_total': 14933889, u'mount': u'/', u'block_available': 13890706, u'size_available': 56896331776, u'fstype': u'xfs', u'inode_total': 59764736, u'inode_available': 59632155, u'device': u'/dev/mapper/ol_tidb--1-root', u'inode_used': 132581, u'block_size': 4096, u'options': u'rw,relatime,attr2,inode64,noquota'})

TASK [check_system_optional : Preflight check - Check Monitor server's disk space] **********************************************************************************
ok: [10.158.1.98] => {
    "msg": "The file system mounted at / does not meet minimum disk space requirement: at least 500.0 GB."
}

PLAY [tikv_servers machine benchmark] *******************************************************************************************************************************

TASK [machine_benchmark : create fio and tikv data directories] *****************************************************************************************************
changed: [TiKV-1-1] => (item=/tidb_data_1/data1/deploy/data)
changed: [TiKV-2-1] => (item=/tidb_data_1/data2/deploy/data)
changed: [TiKV-1-2] => (item=/tidb_data_1/data1/deploy/data)
changed: [TiKV-2-2] => (item=/tidb_data_1/data2/deploy/data)
changed: [TiKV-1-3] => (item=/tidb_data_1/data1/deploy/data)
changed: [TiKV-1-1] => (item=/tidb_data_1/data1/deploy/data/fio)
changed: [TiKV-2-1] => (item=/tidb_data_1/data2/deploy/data/fio)
changed: [TiKV-1-2] => (item=/tidb_data_1/data1/deploy/data/fio)
changed: [TiKV-2-2] => (item=/tidb_data_1/data2/deploy/data/fio)
changed: [TiKV-1-3] => (item=/tidb_data_1/data1/deploy/data/fio)
changed: [TiKV-2-3] => (item=/tidb_data_1/data2/deploy/data)
changed: [TiKV-2-3] => (item=/tidb_data_1/data2/deploy/data/fio)

TASK [machine_benchmark : deploy fio binary] ************************************************************************************************************************
changed: [TiKV-1-1]
changed: [TiKV-2-1]
changed: [TiKV-1-3]
changed: [TiKV-1-2]
changed: [TiKV-2-2]
changed: [TiKV-2-3]

TASK [machine_benchmark : deploy parse_fio_output.py script] ********************************************************************************************************
changed: [TiKV-1-1]
changed: [TiKV-2-1]
changed: [TiKV-1-2]
changed: [TiKV-2-2]
changed: [TiKV-1-3]
changed: [TiKV-2-3]

TASK [machine_benchmark : include_tasks] ****************************************************************************************************************************
included: /home/tidb/tidb-ansible/roles/machine_benchmark/tasks/fio_randread.yml for TiKV-1-1, TiKV-2-1, TiKV-1-2, TiKV-2-2, TiKV-1-3, TiKV-2-3

TASK [machine_benchmark : fio randread benchmark on tikv_data_dir disk] *********************************************************************************************

(wait a while for the benchmark to finish...)


This stage includes the system environment checks.

Issues you may encounter:
1) CPU core count
In my environment, for example, the check reported the error below because the machines did not meet the minimum requirement of 8 CPU cores:

TASK [check_system_static : Preflight check - Check Linux kernel overcommit_memory parameter] ***********************************************************************
changed: [10.158.1.98]
changed: [10.158.1.122]
changed: [10.158.1.123]
changed: [10.158.1.125]
changed: [10.158.1.126]
changed: [10.158.1.127]
changed: [TiKV-1-1]
changed: [TiKV-2-1]
changed: [TiKV-1-2]
changed: [TiKV-2-2]
changed: [TiKV-1-3]
changed: [TiKV-2-3]

TASK [check_system_static : Preflight check - Fail when Linux kernel vm.overcommit_memory parameter is set to 2] ****************************************************

TASK [check_system_optional : Preflight check - Check TiDB server's CPU] ********************************************************************************************
fatal: [10.158.1.98]: FAILED! => {"changed": false, "msg": "This machine does not have sufficient CPU to run TiDB, at least 8 cores."}
fatal: [10.158.1.122]: FAILED! => {"changed": false, "msg": "This machine does not have sufficient CPU to run TiDB, at least 8 cores."}

NO MORE HOSTS LEFT **************************************************************************************************************************************************
        to retry, use: --limit @/home/tidb/tidb-ansible/retry_files/bootstrap.retry

PLAY RECAP **********************************************************************************************************************************************************
10.158.1.122               : ok=29   changed=12   unreachable=0    failed=1   
10.158.1.123               : ok=29   changed=12   unreachable=0    failed=0   
10.158.1.125               : ok=29   changed=12   unreachable=0    failed=0   
10.158.1.126               : ok=29   changed=12   unreachable=0    failed=0   
10.158.1.127               : ok=29   changed=12   unreachable=0    failed=0   
10.158.1.98                : ok=29   changed=12   unreachable=0    failed=1   
TiKV-1-1                   : ok=28   changed=9    unreachable=0    failed=0   
TiKV-1-2                   : ok=28   changed=9    unreachable=0    failed=0   
TiKV-1-3                   : ok=28   changed=9    unreachable=0    failed=0   
TiKV-2-1                   : ok=28   changed=9    unreachable=0    failed=0   
TiKV-2-2                   : ok=28   changed=9    unreachable=0    failed=0   
TiKV-2-3                   : ok=28   changed=9    unreachable=0    failed=0   
localhost                  : ok=1    changed=0    unreachable=0    failed=0   


ERROR MESSAGE SUMMARY ***********************************************************************************************************************************************
[10.158.1.98]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_optional : Preflight check - Check TiDB server's CPU; message: {"changed": false, "msg": "This machine does not have sufficient CPU to run TiDB, at least 8 cores."}

[10.158.1.122]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_optional : Preflight check - Check TiDB server's CPU; message: {"changed": false, "msg": "This machine does not have sufficient CPU to run TiDB, at least 8 cores."}

Ask for help:
Contact us: support@pingcap.com
It seems that you encounter some problems. You can send an email to the above email address, attached with the tidb-ansible/inventory.ini and tidb-ansible/log/ansible.log files and the error message, or new issue on https://github.com/pingcap/tidb-ansible/issues. We'll try our best to help you deploy a TiDB cluster. Thanks. :-)
[tidb@tidb-center tidb-ansible]$ 

Solution:
Increase the number of CPU cores on the affected machines, then run the playbook again.
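After resizing the machines, you can confirm the core count on each node before re-running the playbook. A minimal check, assuming `nproc` (coreutils) is available, as it is on CentOS/RHEL 7:

```shell
# Confirm the node meets the 8-core minimum that the
# check_system_optional preflight task enforces.
cores=$(nproc)
echo "CPU cores: ${cores}"
```

If the reported value is below 8, the `Check TiDB server's CPU` task will fail again on that host.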

2) Memory size
TiDB and TiKV servers need at least 16 GB of RAM.

TiDB

TASK [check_system_optional : Preflight check - Check TiDB server's RAM] ********************************************************************************************
fatal: [10.158.1.98]: FAILED! => {"changed": false, "msg": "This machine does not have sufficient RAM to run TiDB, at least 16000 MB."}
fatal: [10.158.1.122]: FAILED! => {"changed": false, "msg": "This machine does not have sufficient RAM to run TiDB, at least 16000 MB."}

NO MORE HOSTS LEFT **************************************************************************************************************************************************
        to retry, use: --limit @/home/tidb/tidb-ansible/retry_files/bootstrap.retry

PLAY RECAP **********************************************************************************************************************************************************
10.158.1.122               : ok=29   changed=7    unreachable=0    failed=1   
10.158.1.123               : ok=29   changed=7    unreachable=0    failed=0   
10.158.1.125               : ok=29   changed=7    unreachable=0    failed=0   
10.158.1.126               : ok=29   changed=7    unreachable=0    failed=0   
10.158.1.127               : ok=29   changed=7    unreachable=0    failed=0   
10.158.1.98                : ok=29   changed=7    unreachable=0    failed=1   
TiKV-1-1                   : ok=28   changed=7    unreachable=0    failed=0   
TiKV-1-2                   : ok=28   changed=7    unreachable=0    failed=0   
TiKV-1-3                   : ok=28   changed=7    unreachable=0    failed=0   
TiKV-2-1                   : ok=28   changed=7    unreachable=0    failed=0   
TiKV-2-2                   : ok=28   changed=7    unreachable=0    failed=0   
TiKV-2-3                   : ok=28   changed=7    unreachable=0    failed=0   
localhost                  : ok=1    changed=0    unreachable=0    failed=0   


ERROR MESSAGE SUMMARY ***********************************************************************************************************************************************
[10.158.1.98]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_optional : Preflight check - Check TiDB server's RAM; message: {"changed": false, "msg": "This machine does not have sufficient RAM to run TiDB, at least 16000 MB."}

[10.158.1.122]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_optional : Preflight check - Check TiDB server's RAM; message: {"changed": false, "msg": "This machine does not have sufficient RAM to run TiDB, at least 16000 MB."}

Ask for help:
Contact us: support@pingcap.com
It seems that you encounter some problems. You can send an email to the above email address, attached with the tidb-ansible/inventory.ini and tidb-ansible/log/ansible.log files and the error message, or new issue on https://github.com/pingcap/tidb-ansible/issues. We'll try our best to help you deploy a TiDB cluster. Thanks. :-)
[tidb@tidb-center tidb-ansible]$ 

TiKV

10.158.1.127               : ok=28   changed=6    unreachable=0    failed=0   
10.158.1.98                : ok=29   changed=7    unreachable=0    failed=0   
TiKV-1-1                   : ok=27   changed=6    unreachable=0    failed=1   
TiKV-1-2                   : ok=27   changed=6    unreachable=0    failed=1   
TiKV-1-3                   : ok=27   changed=6    unreachable=0    failed=1   
TiKV-2-1                   : ok=27   changed=6    unreachable=0    failed=1   
TiKV-2-2                   : ok=27   changed=6    unreachable=0    failed=1   
TiKV-2-3                   : ok=27   changed=6    unreachable=0    failed=1   
localhost                  : ok=1    changed=0    unreachable=0    failed=0   


ERROR MESSAGE SUMMARY ***********************************************************************************************************************************************
[TiKV-1-1]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_optional : Preflight check - Check TiKV server's RAM; message: {"changed": false, "msg": "This machine does not have sufficient RAM to run TiKV, at least 16000 MB."}

[TiKV-2-1]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_optional : Preflight check - Check TiKV server's RAM; message: {"changed": false, "msg": "This machine does not have sufficient RAM to run TiKV, at least 16000 MB."}

[TiKV-1-2]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_optional : Preflight check - Check TiKV server's RAM; message: {"changed": false, "msg": "This machine does not have sufficient RAM to run TiKV, at least 16000 MB."}

[TiKV-2-2]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_optional : Preflight check - Check TiKV server's RAM; message: {"changed": false, "msg": "This machine does not have sufficient RAM to run TiKV, at least 16000 MB."}

[TiKV-1-3]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_optional : Preflight check - Check TiKV server's RAM; message: {"changed": false, "msg": "This machine does not have sufficient RAM to run TiKV, at least 16000 MB."}

[TiKV-2-3]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_optional : Preflight check - Check TiKV server's RAM; message: {"changed": false, "msg": "This machine does not have sufficient RAM to run TiKV, at least 16000 MB."}

Ask for help:
Contact us: support@pingcap.com
It seems that you encounter some problems. You can send an email to the above email address, attached with the tidb-ansible/inventory.ini and tidb-ansible/log/ansible.log files and the error message, or new issue on https://github.com/pingcap/tidb-ansible/issues. We'll try our best to help you deploy a TiDB cluster. Thanks. :-)
[tidb@tidb-center tidb-ansible]$ 

PD

TASK [check_system_static : Preflight check - Fail when Linux kernel vm.overcommit_memory parameter is set to 2] ****************************************************

TASK [check_system_optional : Preflight check - Check TiDB server's CPU] ********************************************************************************************

TASK [check_system_optional : Preflight check - Check TiKV server's CPU] ********************************************************************************************

TASK [check_system_optional : Preflight check - Check PD server's CPU] **********************************************************************************************

TASK [check_system_optional : Preflight check - Check Monitor server's CPU] *****************************************************************************************

TASK [check_system_optional : Preflight check - Check TiDB server's RAM] ********************************************************************************************

TASK [check_system_optional : Preflight check - Check TiKV server's RAM] ********************************************************************************************

TASK [check_system_optional : Preflight check - Check PD server's RAM] **********************************************************************************************
fatal: [10.158.1.123]: FAILED! => {"changed": false, "msg": "This machine does not have sufficient RAM to run PD, at least 8000 MB."}

NO MORE HOSTS LEFT **************************************************************************************************************************************************
        to retry, use: --limit @/home/tidb/tidb-ansible/retry_files/bootstrap.retry

PLAY RECAP **********************************************************************************************************************************************************
10.158.1.122               : ok=28   changed=6    unreachable=0    failed=0   
10.158.1.123               : ok=28   changed=6    unreachable=0    failed=1   
10.158.1.125               : ok=29   changed=7    unreachable=0    failed=0   
10.158.1.126               : ok=29   changed=7    unreachable=0    failed=0   
10.158.1.127               : ok=29   changed=7    unreachable=0    failed=0   
10.158.1.98                : ok=28   changed=6    unreachable=0    failed=0   
TiKV-1-1                   : ok=28   changed=7    unreachable=0    failed=0   
TiKV-1-2                   : ok=28   changed=7    unreachable=0    failed=0   
TiKV-1-3                   : ok=28   changed=7    unreachable=0    failed=0   
TiKV-2-1                   : ok=28   changed=7    unreachable=0    failed=0   
TiKV-2-2                   : ok=28   changed=7    unreachable=0    failed=0   
TiKV-2-3                   : ok=28   changed=7    unreachable=0    failed=0   
localhost                  : ok=1    changed=0    unreachable=0    failed=0   


ERROR MESSAGE SUMMARY ***********************************************************************************************************************************************
[10.158.1.123]: Ansible FAILED! => playbook: bootstrap.yml; TASK: check_system_optional : Preflight check - Check PD server's RAM; message: {"changed": false, "msg": "This machine does not have sufficient RAM to run PD, at least 8000 MB."}

Ask for help:
Contact us: support@pingcap.com
It seems that you encounter some problems. You can send an email to the above email address, attached with the tidb-ansible/inventory.ini and tidb-ansible/log/ansible.log files and the error message, or new issue on https://github.com/pingcap/tidb-ansible/issues. We'll try our best to help you deploy a TiDB cluster. Thanks. :-)
[tidb@tidb-center tidb-ansible]$ 

Solution:
Increase the RAM of every node to at least 16 GB (per the check above, PD nodes need at least 8 GB).
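To verify the memory on each node matches what the preflight check expects, you can read it in MB, the same unit the error messages use. A quick sketch using `/proc/meminfo`:

```shell
# Report total RAM in MB; TiDB/TiKV nodes need >= 16000 MB,
# PD nodes >= 8000 MB, per the preflight error messages.
mem_mb=$(awk '/^MemTotal:/{print int($2/1024)}' /proc/meminfo)
echo "Total RAM: ${mem_mb} MB"
```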

3) Disk performance (SSD)
The error is as follows:

TASK [machine_benchmark : deploy parse_fio_output.py script] ********************************************************************************************************
changed: [TiKV-1-1]
changed: [TiKV-2-1]
changed: [TiKV-1-2]
changed: [TiKV-2-2]
changed: [TiKV-1-3]
changed: [TiKV-2-3]

TASK [machine_benchmark : include_tasks] ****************************************************************************************************************************
included: /home/tidb/tidb-ansible/roles/machine_benchmark/tasks/fio_randread.yml for TiKV-1-1, TiKV-2-1, TiKV-1-2, TiKV-2-2, TiKV-1-3, TiKV-2-3

TASK [machine_benchmark : fio randread benchmark on tikv_data_dir disk] *********************************************************************************************
changed: [TiKV-1-3]
changed: [TiKV-1-2]
changed: [TiKV-2-2]
changed: [TiKV-2-3]
changed: [TiKV-1-1]
changed: [TiKV-2-1]

TASK [machine_benchmark : clean fio randread benchmark temporary file] **********************************************************************************************
changed: [TiKV-2-1]
changed: [TiKV-1-1]
changed: [TiKV-1-3]
changed: [TiKV-2-3]
changed: [TiKV-1-2]
changed: [TiKV-2-2]

TASK [machine_benchmark : get fio randread iops] ********************************************************************************************************************
changed: [TiKV-1-2]
changed: [TiKV-2-1]
changed: [TiKV-1-1]
changed: [TiKV-2-2]
changed: [TiKV-2-3]
changed: [TiKV-1-3]

TASK [machine_benchmark : get fio randread summary] *****************************************************************************************************************
changed: [TiKV-1-1]
changed: [TiKV-1-2]
changed: [TiKV-2-1]
changed: [TiKV-2-2]
changed: [TiKV-1-3]
changed: [TiKV-2-3]

TASK [machine_benchmark : fio randread benchmark command] ***********************************************************************************************************
ok: [TiKV-1-1] => {
    "msg": "fio randread benchmark command: cd /tidb_data_1/data1/deploy/data/fio && ./fio -ioengine=psync -bs=32k -fdatasync=1 -thread -rw=randread -size=10G -filename=fio_randread_test.txt -name='fio randread test' -iodepth=4 -runtime=60 -numjobs=4 -group_reporting --output-format=json --output=fio_randread_result.json."
}

TASK [machine_benchmark : fio randread benchmark summary] ***********************************************************************************************************
ok: [TiKV-1-1] => {
    "msg": "fio randread benchmark summary: jobname: fio randread test\nread: IOPS=200\nlat (ns): min=11685, max=467072640, avg=19918521\nclat percentiles (ns): 95.00th=46399488, 99.00th=126353408."
}
ok: [TiKV-2-1] => {
    "msg": "fio randread benchmark summary: jobname: fio randread test\nread: IOPS=203\nlat (ns): min=13499, max=491979884, avg=19661249\nclat percentiles (ns): 95.00th=45350912, 99.00th=124256256."
}
ok: [TiKV-1-2] => {
    "msg": "fio randread benchmark summary: jobname: fio randread test\nread: IOPS=104\nlat (ns): min=14876, max=1152280237, avg=38289840\nclat percentiles (ns): 95.00th=126353408, 99.00th=244318208."
}
ok: [TiKV-2-2] => {
    "msg": "fio randread benchmark summary: jobname: fio randread test\nread: IOPS=107\nlat (ns): min=14292, max=890097109, avg=37103308\nclat percentiles (ns): 95.00th=127401984, 99.00th=240123904."
}
ok: [TiKV-1-3] => {
    "msg": "fio randread benchmark summary: jobname: fio randread test\nread: IOPS=158\nlat (ns): min=13291, max=571300684, avg=25272504\nclat percentiles (ns): 95.00th=90701824, 99.00th=198180864."
}
ok: [TiKV-2-3] => {
    "msg": "fio randread benchmark summary: jobname: fio randread test\nread: IOPS=145\nlat (ns): min=13691, max=758452355, avg=27557242\nclat percentiles (ns): 95.00th=76021760, 99.00th=154140672."
}

TASK [machine_benchmark : Preflight check - Does fio randread iops of tikv_data_dir disk meet requirement] **********************************************************
fatal: [TiKV-1-1]: FAILED! => {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 200 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}
fatal: [TiKV-2-1]: FAILED! => {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 203 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}
fatal: [TiKV-1-2]: FAILED! => {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 104 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}
fatal: [TiKV-2-2]: FAILED! => {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 107 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}
fatal: [TiKV-1-3]: FAILED! => {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 158 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}
fatal: [TiKV-2-3]: FAILED! => {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 145 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}
        to retry, use: --limit @/home/tidb/tidb-ansible/retry_files/bootstrap.retry

PLAY RECAP **********************************************************************************************************************************************************
10.158.1.122               : ok=33   changed=6    unreachable=0    failed=0   
10.158.1.123               : ok=32   changed=7    unreachable=0    failed=0   
10.158.1.125               : ok=29   changed=6    unreachable=0    failed=0   
10.158.1.126               : ok=29   changed=6    unreachable=0    failed=0   
10.158.1.127               : ok=29   changed=6    unreachable=0    failed=0   
10.158.1.98                : ok=35   changed=6    unreachable=0    failed=0   
TiKV-1-1                   : ok=40   changed=13   unreachable=0    failed=1   
TiKV-1-2                   : ok=39   changed=13   unreachable=0    failed=1   
TiKV-1-3                   : ok=39   changed=13   unreachable=0    failed=1   
TiKV-2-1                   : ok=39   changed=13   unreachable=0    failed=1   
TiKV-2-2                   : ok=39   changed=13   unreachable=0    failed=1   
TiKV-2-3                   : ok=39   changed=13   unreachable=0    failed=1   
localhost                  : ok=1    changed=0    unreachable=0    failed=0   


ERROR MESSAGE SUMMARY ***********************************************************************************************************************************************
[TiKV-1-1]: Ansible FAILED! => playbook: bootstrap.yml; TASK: machine_benchmark : Preflight check - Does fio randread iops of tikv_data_dir disk meet requirement; message: {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 200 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}

[TiKV-2-1]: Ansible FAILED! => playbook: bootstrap.yml; TASK: machine_benchmark : Preflight check - Does fio randread iops of tikv_data_dir disk meet requirement; message: {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 203 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}

[TiKV-1-2]: Ansible FAILED! => playbook: bootstrap.yml; TASK: machine_benchmark : Preflight check - Does fio randread iops of tikv_data_dir disk meet requirement; message: {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 104 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}

[TiKV-2-2]: Ansible FAILED! => playbook: bootstrap.yml; TASK: machine_benchmark : Preflight check - Does fio randread iops of tikv_data_dir disk meet requirement; message: {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 107 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}

[TiKV-1-3]: Ansible FAILED! => playbook: bootstrap.yml; TASK: machine_benchmark : Preflight check - Does fio randread iops of tikv_data_dir disk meet requirement; message: {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 158 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}

[TiKV-2-3]: Ansible FAILED! => playbook: bootstrap.yml; TASK: machine_benchmark : Preflight check - Does fio randread iops of tikv_data_dir disk meet requirement; message: {"changed": false, "msg": "fio: randread iops of tikv_data_dir disk is too low: 145 < 40000, it is strongly recommended to use SSD disks for TiKV and PD, or there might be performance issues."}

Ask for help:
Contact us: support@pingcap.com
It seems that you encounter some problems. You can send an email to the above email address, attached with the tidb-ansible/inventory.ini and tidb-ansible/log/ansible.log files and the error message, or new issue on https://github.com/pingcap/tidb-ansible/issues. We'll try our best to help you deploy a TiDB cluster. Thanks. :-)
[tidb@tidb-center tidb-ansible]$ 

The fio benchmark shows that the disks are too slow; SSDs are strongly recommended for the TiKV and PD servers.
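For reference, the preflight task parses fio's JSON result file and fails when the randread IOPS falls below 40000. A rough sketch of that comparison, using a hypothetical result file (the playbook itself does this via parse_fio_output.py):

```shell
# Simulate a fio JSON result; the real file is written by the benchmark
# command shown in the log above, and the IOPS value here is hypothetical.
cat > fio_randread_result.json <<'EOF'
{"jobs": [{"jobname": "fio randread test", "read": {"iops": 200.12}}]}
EOF

# Extract the integer part of the read IOPS and compare it
# against the 40000 threshold the preflight check uses.
iops=$(grep -o '"iops": *[0-9]*' fio_randread_result.json | head -n 1 | grep -o '[0-9]*$')
if [ "$iops" -lt 40000 ]; then
  echo "randread iops of tikv_data_dir disk is too low: ${iops} < 40000"
fi
```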

Solution:
If the hardware cannot be replaced, you can skip the disk performance benchmark check when running the playbook:

[tidb@tidb-center tidb-ansible]$ ansible-playbook bootstrap.yml --extra-vars "dev_mode=True"

(lengthy output omitted ...)

Finally, this is what a successful run of this stage looks like:

[tidb@tidb-center tidb-ansible]$ ansible-playbook bootstrap.yml --extra-vars "dev_mode=True"

(lengthy output omitted ...)

TASK [check_system_static : Preflight check - Fail when Linux kernel vm.overcommit_memory parameter is set to 2] ****************************************************

TASK [check_system_optional : Preflight check - Check TiDB server's CPU] ********************************************************************************************

TASK [check_system_optional : Preflight check - Check TiKV server's CPU] ********************************************************************************************

TASK [check_system_optional : Preflight check - Check PD server's CPU] **********************************************************************************************

TASK [check_system_optional : Preflight check - Check Monitor server's CPU] *****************************************************************************************

TASK [check_system_optional : Preflight check - Check TiDB server's RAM] ********************************************************************************************

TASK [check_system_optional : Preflight check - Check TiKV server's RAM] ********************************************************************************************

TASK [check_system_optional : Preflight check - Check PD server's RAM] **********************************************************************************************

TASK [check_system_optional : Preflight check - Check Monitor server's RAM] *****************************************************************************************

TASK [check_system_optional : Set deploy_dir if not presented] ******************************************************************************************************

TASK [check_system_optional : Determine which mountpoint deploy dir exists on] **************************************************************************************

TASK [check_system_optional : set_fact] *****************************************************************************************************************************

TASK [check_system_optional : Preflight check - Check TiDB server's disk space] *************************************************************************************

TASK [check_system_optional : set_fact] *****************************************************************************************************************************

TASK [check_system_optional : Preflight check - Check TiKV server's disk space] *************************************************************************************

TASK [check_system_optional : set_fact] *****************************************************************************************************************************

TASK [check_system_optional : Preflight check - Check PD server's disk space] ***************************************************************************************

TASK [check_system_optional : set_fact] *****************************************************************************************************************************

TASK [check_system_optional : Preflight check - Check Monitor server's disk space] **********************************************************************************

PLAY [tikv_servers machine benchmark] *******************************************************************************************************************************

TASK [machine_benchmark : create fio and tikv data directories] *****************************************************************************************************

TASK [machine_benchmark : deploy fio binary] ************************************************************************************************************************

TASK [machine_benchmark : deploy parse_fio_output.py script] ********************************************************************************************************

TASK [machine_benchmark : include_tasks] ****************************************************************************************************************************

TASK [machine_benchmark : include_tasks] ****************************************************************************************************************************

TASK [machine_benchmark : include_tasks] ****************************************************************************************************************************

PLAY [create ops scripts] *******************************************************************************************************************************************

TASK [ops : create check_tikv.sh script] ****************************************************************************************************************************
changed: [localhost]

TASK [ops : create pd-ctl.sh script] ********************************************************************************************************************************
changed: [localhost]

PLAY RECAP **********************************************************************************************************************************************************
10.158.1.122               : ok=28   changed=6    unreachable=0    failed=0   
10.158.1.123               : ok=28   changed=6    unreachable=0    failed=0   
10.158.1.125               : ok=28   changed=6    unreachable=0    failed=0   
10.158.1.126               : ok=28   changed=6    unreachable=0    failed=0   
10.158.1.127               : ok=28   changed=6    unreachable=0    failed=0   
10.158.1.98                : ok=28   changed=6    unreachable=0    failed=0   
TiKV-1-1                   : ok=27   changed=6    unreachable=0    failed=0   
TiKV-1-2                   : ok=27   changed=6    unreachable=0    failed=0   
TiKV-1-3                   : ok=27   changed=6    unreachable=0    failed=0   
TiKV-2-1                   : ok=27   changed=6    unreachable=0    failed=0   
TiKV-2-2                   : ok=27   changed=6    unreachable=0    failed=0   
TiKV-2-3                   : ok=27   changed=6    unreachable=0    failed=0   
localhost                  : ok=3    changed=2    unreachable=0    failed=0   

Congrats! All goes well. :-)
[tidb@tidb-center tidb-ansible]$ 
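bootstrap.yml (and the later playbooks) end with a PLAY RECAP like the one above; a run is clean only when every host reports failed=0 and unreachable=0. A small illustrative parser for such recap lines (my own helper, not part of tidb-ansible):

```python
import re

# Matches one PLAY RECAP line, e.g.
# "10.158.1.98   : ok=28   changed=6    unreachable=0    failed=0"
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s+:\s+ok=(?P<ok>\d+)\s+changed=\d+\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def all_hosts_ok(recap: str) -> bool:
    """True if every recap line shows failed=0 and unreachable=0."""
    hosts = [m for m in map(RECAP_RE.match, recap.splitlines()) if m]
    return bool(hosts) and all(
        m["failed"] == "0" and m["unreachable"] == "0" for m in hosts
    )

sample = """10.158.1.98                : ok=28   changed=6    unreachable=0    failed=0
TiKV-1-1                   : ok=27   changed=6    unreachable=0    failed=0"""
print(all_hosts_ok(sample))  # → True
```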

Next, deploy the TiDB cluster software:

[tidb@tidb-center tidb-ansible]$ ansible-playbook deploy.yml

(lengthy output omitted ...)

TASK [tidb : include_tasks] *****************************************************************************************************************************************
included: /home/tidb/tidb-ansible/roles/tidb/tasks/systemd_deployment.yml for 10.158.1.98, 10.158.1.122

TASK [include_role : systemd] ***************************************************************************************************************************************

TASK [systemd : create systemd service configuration] ***************************************************************************************************************
changed: [10.158.1.122]
changed: [10.158.1.98]

TASK [systemd : create startup script - common start/stop] **********************************************************************************************************
changed: [10.158.1.122] => (item=start)
changed: [10.158.1.98] => (item=start)
changed: [10.158.1.122] => (item=stop)
changed: [10.158.1.98] => (item=stop)

TASK [systemd : reload systemd] *************************************************************************************************************************************
changed: [10.158.1.122]
changed: [10.158.1.98]

TASK [tidb : prepare firewalld white list] **************************************************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]

TASK [tispark : deploy spark-2.3.2-bin-hadoop2.7] *******************************************************************************************************************
changed: [10.158.1.122]
changed: [10.158.1.98]

TASK [tispark : rename spark deploy dir] ****************************************************************************************************************************
changed: [10.158.1.98]
changed: [10.158.1.122]

TASK [tispark : deploy tispark] *************************************************************************************************************************************
changed: [10.158.1.122]
changed: [10.158.1.98]

TASK [tispark : load customized spark_env] **************************************************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]

TASK [tispark : create spark_env.sh file] ***************************************************************************************************************************
changed: [10.158.1.122]
changed: [10.158.1.98]

TASK [tispark : load customized spark_defaults] *********************************************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]

TASK [tispark : create spark_defaults.conf file] ********************************************************************************************************************
changed: [10.158.1.122]
changed: [10.158.1.98]

TASK [tispark : create spark-slave.sh] ******************************************************************************************************************************

PLAY [deploying tispark cluster] ************************************************************************************************************************************
skipping: no hosts matched

PLAY [deploying tidb-lightning] *************************************************************************************************************************************
skipping: no hosts matched

PLAY [deploying tikv-importer] **************************************************************************************************************************************
skipping: no hosts matched

PLAY [finalizing deployment target] *********************************************************************************************************************************

TASK [firewalld : All enabled ports] ********************************************************************************************************************************

TASK [firewalld : determine if firewalld is running] ****************************************************************************************************************

TASK [firewalld : enable firewalld ports] ***************************************************************************************************************************

TASK [firewalld : reload firewalld] *********************************************************************************************************************************

PLAY RECAP **********************************************************************************************************************************************************
10.158.1.122               : ok=77   changed=34   unreachable=0    failed=0   
10.158.1.123               : ok=53   changed=23   unreachable=0    failed=0   
10.158.1.125               : ok=36   changed=16   unreachable=0    failed=0   
10.158.1.126               : ok=36   changed=16   unreachable=0    failed=0   
10.158.1.127               : ok=36   changed=16   unreachable=0    failed=0   
10.158.1.98                : ok=133  changed=67   unreachable=0    failed=0   
TiKV-1-1                   : ok=25   changed=10   unreachable=0    failed=0   
TiKV-1-2                   : ok=25   changed=10   unreachable=0    failed=0   
TiKV-1-3                   : ok=25   changed=10   unreachable=0    failed=0   
TiKV-2-1                   : ok=25   changed=10   unreachable=0    failed=0   
TiKV-2-2                   : ok=25   changed=10   unreachable=0    failed=0   
TiKV-2-3                   : ok=25   changed=10   unreachable=0    failed=0   
localhost                  : ok=1    changed=0    unreachable=0    failed=0   

Congrats! All goes well. :-)
[tidb@tidb-center tidb-ansible]$ 

Possible errors at this stage:
1) NTP service
If the NTP service is not running on the TiDB cluster nodes, the playbook fails:

TASK [check_system_dynamic : Preflight check - Get NTP service status] **********************************************************************************************
ok: [10.158.1.123]
ok: [10.158.1.122]
ok: [10.158.1.126]
ok: [10.158.1.125]
ok: [10.158.1.98]
ok: [10.158.1.127]

TASK [check_system_dynamic : Preflight check - NTP service] *********************************************************************************************************
fatal: [10.158.1.98]: FAILED! => {"changed": false, "msg": "Make sure NTP service is running and ntpstat is synchronised to NTP server. See https://github.com/pingcap/docs/blob/master/op-guide/ansible-deployment.md#how-to-check-whether-the-ntp-service-is-normal ."}
fatal: [10.158.1.122]: FAILED! => {"changed": false, "msg": "Make sure NTP service is running and ntpstat is synchronised to NTP server. See https://github.com/pingcap/docs/blob/master/op-guide/ansible-deployment.md#how-to-check-whether-the-ntp-service-is-normal ."}
fatal: [10.158.1.123]: FAILED! => {"changed": false, "msg": "Make sure NTP service is running and ntpstat is synchronised to NTP server. See https://github.com/pingcap/docs/blob/master/op-guide/ansible-deployment.md#how-to-check-whether-the-ntp-service-is-normal ."}
fatal: [10.158.1.125]: FAILED! => {"changed": false, "msg": "Make sure NTP service is running and ntpstat is synchronised to NTP server. See https://github.com/pingcap/docs/blob/master/op-guide/ansible-deployment.md#how-to-check-whether-the-ntp-service-is-normal ."}
fatal: [10.158.1.126]: FAILED! => {"changed": false, "msg": "Make sure NTP service is running and ntpstat is synchronised to NTP server. See https://github.com/pingcap/docs/blob/master/op-guide/ansible-deployment.md#how-to-check-whether-the-ntp-service-is-normal ."}
fatal: [10.158.1.127]: FAILED! => {"changed": false, "msg": "Make sure NTP service is running and ntpstat is synchronised to NTP server. See https://github.com/pingcap/docs/blob/master/op-guide/ansible-deployment.md#how-to-check-whether-the-ntp-service-is-normal ."}

NO MORE HOSTS LEFT **************************************************************************************************************************************************
        to retry, use: --limit @/home/tidb/tidb-ansible/retry_files/deploy.retry

PLAY RECAP **********************************************************************************************************************************************************
10.158.1.122               : ok=3    changed=0    unreachable=0    failed=1   
10.158.1.123               : ok=3    changed=0    unreachable=0    failed=1   
10.158.1.125               : ok=3    changed=0    unreachable=0    failed=1   
10.158.1.126               : ok=3    changed=0    unreachable=0    failed=1   
10.158.1.127               : ok=3    changed=0    unreachable=0    failed=1   
10.158.1.98                : ok=4    changed=0    unreachable=0    failed=1   
localhost                  : ok=1    changed=0    unreachable=0    failed=0   


ERROR MESSAGE SUMMARY ***********************************************************************************************************************************************
[10.158.1.98]: Ansible FAILED! => playbook: deploy.yml; TASK: check_system_dynamic : Preflight check - NTP service; message: {"changed": false, "msg": "Make sure NTP service is running and ntpstat is synchronised to NTP server. See https://github.com/pingcap/docs/blob/master/op-guide/ansible-deployment.md#how-to-check-whether-the-ntp-service-is-normal ."}

[10.158.1.122]: Ansible FAILED! => playbook: deploy.yml; TASK: check_system_dynamic : Preflight check - NTP service; message: {"changed": false, "msg": "Make sure NTP service is running and ntpstat is synchronised to NTP server. See https://github.com/pingcap/docs/blob/master/op-guide/ansible-deployment.md#how-to-check-whether-the-ntp-service-is-normal ."}

[10.158.1.123]: Ansible FAILED! => playbook: deploy.yml; TASK: check_system_dynamic : Preflight check - NTP service; message: {"changed": false, "msg": "Make sure NTP service is running and ntpstat is synchronised to NTP server. See https://github.com/pingcap/docs/blob/master/op-guide/ansible-deployment.md#how-to-check-whether-the-ntp-service-is-normal ."}

[10.158.1.125]: Ansible FAILED! => playbook: deploy.yml; TASK: check_system_dynamic : Preflight check - NTP service; message: {"changed": false, "msg": "Make sure NTP service is running and ntpstat is synchronised to NTP server. See https://github.com/pingcap/docs/blob/master/op-guide/ansible-deployment.md#how-to-check-whether-the-ntp-service-is-normal ."}

[10.158.1.126]: Ansible FAILED! => playbook: deploy.yml; TASK: check_system_dynamic : Preflight check - NTP service; message: {"changed": false, "msg": "Make sure NTP service is running and ntpstat is synchronised to NTP server. See https://github.com/pingcap/docs/blob/master/op-guide/ansible-deployment.md#how-to-check-whether-the-ntp-service-is-normal ."}

[10.158.1.127]: Ansible FAILED! => playbook: deploy.yml; TASK: check_system_dynamic : Preflight check - NTP service; message: {"changed": false, "msg": "Make sure NTP service is running and ntpstat is synchronised to NTP server. See https://github.com/pingcap/docs/blob/master/op-guide/ansible-deployment.md#how-to-check-whether-the-ntp-service-is-normal ."}

Ask for help:
Contact us: support@pingcap.com
It seems that you encounter some problems. You can send an email to the above email address, attached with the tidb-ansible/inventory.ini and tidb-ansible/log/ansible.log files and the error message, or new issue on https://github.com/pingcap/tidb-ansible/issues. We'll try our best to help you deploy a TiDB cluster. Thanks. :-)
[tidb@tidb-center tidb-ansible]$ 
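The check behind this error requires both a running NTP daemon and `ntpstat` reporting the clock as synchronised. A hedged sketch of that logic (the exit-code meanings come from the ntpstat man page; the helper names here are mine, not tidb-ansible's):

```python
import subprocess

# ntpstat exit codes (per its man page):
#   0 -> clock synchronised, 1 -> not synchronised, 2 -> state indeterminate
def interpret_ntpstat(returncode: int) -> str:
    return {0: "synchronised",
            1: "not synchronised",
            2: "indeterminate"}.get(returncode, "unknown")

def check_ntp() -> str:
    """Run ntpstat (if installed) and classify the result."""
    try:
        rc = subprocess.run(["ntpstat"], capture_output=True).returncode
    except FileNotFoundError:
        return "ntpstat not installed"
    return interpret_ntpstat(rc)

print(interpret_ntpstat(0))  # → synchronised
```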

Fix: start and enable the ntpd service on every node (the tidb-ansible repository also ships a deploy_ntp.yml playbook for this, visible in the directory listing below), then re-run deploy.yml.

With that, the TiDB cluster software is installed.

For Grafana's report (image rendering) feature to work, it is recommended to install the following RPMs on the Grafana server.
In my environment, Grafana runs on tidb-1 (this can be seen in the configuration file inventory.ini):

[root@tidb-1 ~]# whoami
root
[root@tidb-1 ~]# 
[root@tidb-1 ~]# hostname
tidb-1
[root@tidb-1 ~]# 
[root@tidb-1 ~]# yum install fontconfig open-sans-fonts
Loaded plugins: langpacks, ulninfo
Package fontconfig-2.13.0-4.3.el7.x86_64 already installed and latest version
Package open-sans-fonts-1.10-1.el7.noarch already installed and latest version
Nothing to do
[root@tidb-1 ~]# 

3 ))))))))))))))) Run

Start the TiDB cluster:

[tidb@tidb-center tidb-ansible]$ pwd
/home/tidb/tidb-ansible
[tidb@tidb-center tidb-ansible]$ 
[tidb@tidb-center tidb-ansible]$ ls -ltr
total 208
-rw-rw-r--  1 tidb tidb  1147 Mar  3 11:55 README.md
-rw-rw-r--  1 tidb tidb 11294 Mar  3 11:55 LICENSE
-rw-rw-r--  1 tidb tidb   214 Mar  3 11:55 clean_log_cron.yml
-rw-rw-r--  1 tidb tidb  1231 Mar  3 11:55 bootstrap.yml
-rw-rw-r--  1 tidb tidb  1142 Mar  3 11:55 ansible.cfg
drwxrwxr-x  3 tidb tidb    25 Mar  3 11:55 cloud
-rw-rw-r--  1 tidb tidb  2265 Mar  3 11:55 collect_diagnosis.yml
drwxrwxr-x  2 tidb tidb  4096 Mar  3 11:55 common_tasks
-rw-rw-r--  1 tidb tidb   516 Mar  3 11:55 create_users.yml
-rw-rw-r--  1 tidb tidb 10346 Mar  3 11:55 graceful_stop.yml
-rw-rw-r--  1 tidb tidb  3222 Mar  3 11:55 deploy.yml
-rw-rw-r--  1 tidb tidb  2076 Mar  3 11:55 deploy_ntp.yml
-rw-rw-r--  1 tidb tidb  1238 Mar  3 11:55 deploy_drainer.yml
drwxrwxr-x  2 tidb tidb  4096 Mar  3 11:55 group_vars
-rw-rw-r--  1 tidb tidb   116 Mar  3 11:55 local_prepare.yml
drwxrwxr-x  2 tidb tidb    69 Mar  3 11:55 library
-rw-rw-r--  1 tidb tidb    52 Mar  3 11:55 requirements.txt
-rw-rw-r--  1 tidb tidb 11003 Mar  3 11:55 migrate_monitor.yml
-rw-rw-r--  1 tidb tidb 10540 Mar  3 11:55 rolling_update.yml
drwxrwxr-x 42 tidb tidb  4096 Mar  3 11:55 roles
-rw-rw-r--  1 tidb tidb 17333 Mar  3 11:55 rolling_update_monitor.yml
-rw-rw-r--  1 tidb tidb 15849 Mar  3 11:55 start.yml
-rw-rw-r--  1 tidb tidb  3633 Mar  3 11:55 unsafe_cleanup.yml
-rw-rw-r--  1 tidb tidb  2051 Mar  3 11:55 unsafe_cleanup_container.yml
drwxrwxr-x  2 tidb tidb    34 Mar  3 11:55 templates
-rw-rw-r--  1 tidb tidb  9297 Mar  3 11:55 stop.yml
-rw-rw-r--  1 tidb tidb   824 Mar  3 11:55 stop_spark.yml
-rw-rw-r--  1 tidb tidb  1543 Mar  3 11:55 stop_drainer.yml
-rw-rw-r--  1 tidb tidb   828 Mar  3 11:55 start_spark.yml
-rw-rw-r--  1 tidb tidb  1542 Mar  3 11:55 start_drainer.yml
-rw-rw-r--  1 tidb tidb  4184 Mar  3 11:55 unsafe_cleanup_data.yml
drwxrwxr-x  2 tidb tidb    37 Mar  3 12:12 callback_plugins
drwxrwxr-x  2 tidb tidb    59 Mar  3 12:12 log
drwxrwxr-x  2 tidb tidb    37 Mar  3 12:12 filter_plugins
-rw-rw-r--  1 tidb tidb   140 Mar  3 12:35 hosts.ini
-rw-rw-r--  1 tidb tidb  2794 Mar  3 13:59 inventory.ini
drwxr-xr-x  3 tidb tidb    17 Mar  3 14:01 resources
drwxrwxr-x  3 tidb tidb  4096 Mar  3 14:01 conf
drwxr-xr-x  2 tidb tidb  4096 Mar  3 14:06 downloads
drwxrwxr-x  2 tidb tidb  4096 Mar  3 14:08 fact_files
drwxrwxr-x  2 tidb tidb    73 Mar  3 14:10 retry_files
drwxrwxr-x  3 tidb tidb  4096 Mar  3 15:27 scripts
[tidb@tidb-center tidb-ansible]$ 

[tidb@tidb-center tidb-ansible]$ ssh tidb-1 date
Sun Mar  3 16:05:50 CST 2019
[tidb@tidb-center tidb-ansible]$ ssh tidb-2 date
Sun Mar  3 16:05:54 CST 2019
[tidb@tidb-center tidb-ansible]$ ssh tidb-3 date
Sun Mar  3 16:05:58 CST 2019
[tidb@tidb-center tidb-ansible]$ ssh tidb-4 date
Sun Mar  3 16:06:01 CST 2019
[tidb@tidb-center tidb-ansible]$ ssh tidb-5 date
Sun Mar  3 16:06:04 CST 2019
[tidb@tidb-center tidb-ansible]$ ssh tidb-6 date
Sun Mar  3 16:06:07 CST 2019
[tidb@tidb-center tidb-ansible]$ 
[tidb@tidb-center tidb-ansible]$ ssh 10.158.1.98 date
Sun Mar  3 16:06:14 CST 2019
[tidb@tidb-center tidb-ansible]$ ssh 10.158.1.122 date
Sun Mar  3 16:06:18 CST 2019
[tidb@tidb-center tidb-ansible]$ ssh 10.158.1.123 date
Sun Mar  3 16:06:22 CST 2019
[tidb@tidb-center tidb-ansible]$ ssh 10.158.1.125 date
Sun Mar  3 16:06:25 CST 2019
[tidb@tidb-center tidb-ansible]$ ssh 10.158.1.126 date
Sun Mar  3 16:06:31 CST 2019
[tidb@tidb-center tidb-ansible]$ ssh 10.158.1.127 date
Sun Mar  3 16:06:34 CST 2019
[tidb@tidb-center tidb-ansible]$ 
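The `ssh ... date` round above is a quick eyeball check that all node clocks agree before starting the cluster. A hypothetical helper that parses such `date` output and reports the spread (it assumes every node reports the same timezone, as in the CST output above):

```python
from datetime import datetime

def max_skew_seconds(date_outputs):
    """Spread between the earliest and latest `date` output, in seconds.
    Drops the timezone token (e.g. 'CST'), so all samples must share one tz."""
    parsed = []
    for line in date_outputs:
        tokens = line.split()   # ['Sun', 'Mar', '3', '16:05:50', 'CST', '2019']
        del tokens[4]           # drop the timezone token before parsing
        parsed.append(datetime.strptime(" ".join(tokens), "%a %b %d %H:%M:%S %Y"))
    return (max(parsed) - min(parsed)).total_seconds()

samples = [
    "Sun Mar  3 16:05:50 CST 2019",
    "Sun Mar  3 16:05:54 CST 2019",
    "Sun Mar  3 16:06:07 CST 2019",
]
print(max_skew_seconds(samples))  # → 17.0
```

Note the samples above are sequential ssh invocations, so part of the spread is just the time the commands took; large gaps, though, point at unsynchronised clocks.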

[tidb@tidb-center tidb-ansible]$ ansible-playbook start.yml

PLAY [check config locally] *****************************************************************************************************************************************

TASK [check_config_static : Ensure only one monitoring host exists] *************************************************************************************************

TASK [check_config_static : Ensure monitored_servers exists] ********************************************************************************************************

TASK [check_config_static : Ensure TiDB host exists] ****************************************************************************************************************

TASK [check_config_static : Ensure PD host exists] ******************************************************************************************************************

TASK [check_config_static : Ensure TiKV host exists] ****************************************************************************************************************

TASK [check_config_static : Check ansible_user variable] ************************************************************************************************************

TASK [check_config_static : Ensure timezone variable is set] ********************************************************************************************************

TASK [check_config_static : Close old SSH control master processes] *************************************************************************************************
ok: [localhost]

PLAY [gather all facts, and check dest] *****************************************************************************************************************************

TASK [check_config_dynamic : Set enable_binlog variable] ************************************************************************************************************

TASK [check_config_dynamic : Set deploy_dir if not set] *************************************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]
ok: [10.158.1.123]
ok: [10.158.1.125]
ok: [10.158.1.126]
ok: [10.158.1.127]

TASK [check_config_dynamic : environment check (deploy dir)] ********************************************************************************************************
ok: [10.158.1.125]
ok: [10.158.1.98]
ok: [10.158.1.126]
ok: [10.158.1.122]
ok: [10.158.1.127]
ok: [TiKV-1-1]
ok: [TiKV-2-1]
ok: [10.158.1.123]
ok: [TiKV-1-2]
ok: [TiKV-1-3]
ok: [TiKV-2-3]
ok: [TiKV-2-2]

TASK [check_config_dynamic : Preflight check - Does deploy dir have appropriate permission] *************************************************************************

TASK [check_config_dynamic : environment check (supervise)] *********************************************************************************************************

TASK [check_config_dynamic : config skip variables (default)] *******************************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]
ok: [10.158.1.123]
ok: [10.158.1.125]
ok: [10.158.1.126]
ok: [10.158.1.127]
ok: [TiKV-1-1]
ok: [TiKV-2-1]
ok: [TiKV-1-2]
ok: [TiKV-2-2]
ok: [TiKV-1-3]
ok: [TiKV-2-3]

TASK [check_config_dynamic : config skip variables] *****************************************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]
ok: [10.158.1.123]
ok: [10.158.1.125]
ok: [10.158.1.126]
ok: [10.158.1.127]
ok: [TiKV-1-1]
ok: [TiKV-2-1]
ok: [TiKV-1-2]
ok: [TiKV-2-2]
ok: [TiKV-1-3]
ok: [TiKV-2-3]

TASK [check_config_dynamic : config skip variables] *****************************************************************************************************************

PLAY [monitored_servers] ********************************************************************************************************************************************

TASK [start node_exporter by supervise] *****************************************************************************************************************************

TASK [start node_exporter by systemd] *******************************************************************************************************************************
changed: [10.158.1.125]
changed: [10.158.1.122]
changed: [10.158.1.98]
changed: [10.158.1.126]
changed: [10.158.1.127]
changed: [10.158.1.123]

TASK [wait until the node_exporter port is up] **********************************************************************************************************************
ok: [10.158.1.122]
ok: [10.158.1.125]
ok: [10.158.1.98]
ok: [10.158.1.126]
ok: [10.158.1.127]
ok: [10.158.1.123]

TASK [wait until the node_exporter metrics page is available] *******************************************************************************************************
ok: [10.158.1.122]
ok: [10.158.1.126]
ok: [10.158.1.127]
ok: [10.158.1.125]
ok: [10.158.1.98]
ok: [10.158.1.123]

PLAY [monitored_servers] ********************************************************************************************************************************************

TASK [start blackbox_exporter by supervise] *************************************************************************************************************************

TASK [start blackbox_exporter by systemd] ***************************************************************************************************************************
changed: [10.158.1.126]
changed: [10.158.1.125]
changed: [10.158.1.122]
changed: [10.158.1.98]
changed: [10.158.1.123]
changed: [10.158.1.127]

TASK [wait until the blackbox_exporter port is up] ******************************************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]
ok: [10.158.1.125]
ok: [10.158.1.126]
ok: [10.158.1.127]
ok: [10.158.1.123]

TASK [wait until the blackbox_exporter metrics page is available] ***************************************************************************************************
ok: [10.158.1.122]
ok: [10.158.1.125]
ok: [10.158.1.98]
ok: [10.158.1.127]
ok: [10.158.1.123]
ok: [10.158.1.126]

PLAY [alertmanager_servers] *****************************************************************************************************************************************
skipping: no hosts matched

PLAY [monitoring_servers] *******************************************************************************************************************************************

TASK [start pushgateway by supervise] *******************************************************************************************************************************

TASK [start pushgateway by systemd] *********************************************************************************************************************************
changed: [10.158.1.98] => (item=pushgateway-9091.service)

TASK [wait until the pushgateway port is up] ************************************************************************************************************************
ok: [10.158.1.98]

TASK [wait until the pushgateway metrics page is available] *********************************************************************************************************
ok: [10.158.1.98]

PLAY [monitoring_servers] *******************************************************************************************************************************************

TASK [start prometheus by supervise] ********************************************************************************************************************************

TASK [start prometheus by systemd] **********************************************************************************************************************************
changed: [10.158.1.98] => (item=prometheus-9090.service)

TASK [wait until the prometheus port is up] *************************************************************************************************************************
ok: [10.158.1.98]

TASK [wait until the prometheus metrics page is available] **********************************************************************************************************
ok: [10.158.1.98]

PLAY [kafka_exporter_servers] ***************************************************************************************************************************************
skipping: no hosts matched

PLAY [pd_servers] ***************************************************************************************************************************************************

TASK [start PD by supervise] ****************************************************************************************************************************************

TASK [start PD by systemd] ******************************************************************************************************************************************
changed: [10.158.1.98]
changed: [10.158.1.123]
changed: [10.158.1.122]

TASK [wait until the PD port is up] *********************************************************************************************************************************
ok: [10.158.1.122]
ok: [10.158.1.98]
ok: [10.158.1.123]

TASK [wait until the PD health page is available] *******************************************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]
ok: [10.158.1.123]

TASK [wait until the PD health page is available when enable_tls] ***************************************************************************************************

PLAY [tikv_servers] *************************************************************************************************************************************************

TASK [start TiKV by supervise] **************************************************************************************************************************************

TASK [start TiKV by systemd] ****************************************************************************************************************************************
changed: [TiKV-2-1]
changed: [TiKV-1-1]
changed: [TiKV-1-3]
changed: [TiKV-2-2]
changed: [TiKV-1-2]
changed: [TiKV-2-3]

TASK [wait until the TiKV port is up] *******************************************************************************************************************************
ok: [TiKV-1-3]
ok: [TiKV-2-3]
ok: [TiKV-2-1]
ok: [TiKV-1-1]
ok: [TiKV-1-2]
ok: [TiKV-2-2]

TASK [wait until the TiKV status page is available] *****************************************************************************************************************

TASK [wait until the TiKV status page is available when enable_tls] *************************************************************************************************

TASK [wait until TiKV process is up] ********************************************************************************************************************************
ok: [TiKV-1-3]
ok: [TiKV-2-3]
ok: [TiKV-2-2]
ok: [TiKV-1-2]
ok: [TiKV-1-1]
ok: [TiKV-2-1]

TASK [command] ******************************************************************************************************************************************************
ok: [TiKV-1-3]
ok: [TiKV-2-1]
ok: [TiKV-1-1]
ok: [TiKV-2-3]
ok: [TiKV-1-2]
ok: [TiKV-2-2]

TASK [display new tikv pid] *****************************************************************************************************************************************
ok: [TiKV-1-1] => {
    "msg": "tikv binary or docker pid: 17526"
}
ok: [TiKV-2-1] => {
    "msg": "tikv binary or docker pid: 17505"
}
ok: [TiKV-1-2] => {
    "msg": "tikv binary or docker pid: 12523"
}
ok: [TiKV-2-2] => {
    "msg": "tikv binary or docker pid: 12499"
}
ok: [TiKV-1-3] => {
    "msg": "tikv binary or docker pid: 17289"
}
ok: [TiKV-2-3] => {
    "msg": "tikv binary or docker pid: 17391"
}

PLAY [pump_servers] *************************************************************************************************************************************************
skipping: no hosts matched

PLAY [tidb_servers] *************************************************************************************************************************************************

TASK [start TiDB by supervise] **************************************************************************************************************************************

TASK [start TiDB by systemd] ****************************************************************************************************************************************
changed: [10.158.1.98]
changed: [10.158.1.122]

TASK [wait until the TiDB port is up] *******************************************************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]

TASK [wait until the TiDB status page is available] *****************************************************************************************************************
ok: [10.158.1.98]
ok: [10.158.1.122]

TASK [wait until the TiDB status page is available when enable_tls] *************************************************************************************************

PLAY [grafana_servers] **********************************************************************************************************************************************

TASK [common_dir : create deploy directories] ***********************************************************************************************************************

TASK [common_dir : create status directory] *************************************************************************************************************************

TASK [common_dir : create deploy binary directory] ******************************************************************************************************************

TASK [common_dir : create docker image directory] *******************************************************************************************************************

TASK [grafana : include_tasks] **************************************************************************************************************************************

TASK [start grafana by supervise] ***********************************************************************************************************************************

TASK [start grafana by systemd] *************************************************************************************************************************************
changed: [10.158.1.98]

TASK [wait until the grafana port is up] ****************************************************************************************************************************
ok: [10.158.1.98]

TASK [wait until the grafana login page is available] ***************************************************************************************************************
ok: [10.158.1.98]

TASK [start grafana_collector by supervise] *************************************************************************************************************************

TASK [start grafana_collector by systemd] ***************************************************************************************************************************
changed: [10.158.1.98]

TASK [wait until the grafana_collector port is up] ******************************************************************************************************************
ok: [10.158.1.98]

TASK [set_fact] *****************************************************************************************************************************************************
ok: [10.158.1.98]

TASK [include_tasks] ************************************************************************************************************************************************
included: /home/tidb/tidb-ansible/common_tasks/create_grafana_api_keys.yml for 10.158.1.98

TASK [Ensure grafana API Key directory exists] **********************************************************************************************************************
changed: [10.158.1.98 -> localhost]

TASK [Check grafana API Key list] ***********************************************************************************************************************************
ok: [10.158.1.98]

TASK [Check grafana API Key file existed] ***************************************************************************************************************************
ok: [10.158.1.98 -> localhost]

TASK [set_fact] *****************************************************************************************************************************************************

TASK [debug] ********************************************************************************************************************************************************

TASK [Delete grafana API Key when grafana API Key file is missing] **************************************************************************************************

TASK [Create grafana API Key] ***************************************************************************************************************************************
ok: [10.158.1.98] => (item={u'role': u'Admin', u'name': u'grafana_apikey'})

TASK [Create grafana API key file] **********************************************************************************************************************************
changed: [10.158.1.98 -> localhost] => (item={u'status': 200, u'content_length': u'126', u'cookies': {}, u'changed': False, '_ansible_no_log': False, 'item': {u'role': u'Admin', u'name': u'grafana_apikey'}, u'url': u'http://10.158.1.98:3000/api/auth/keys', '_ansible_item_result': True, u'json': {u'name': u'grafana_apikey', u'key': u'eyJrIjoiS2ZuN0d0cTNyazZpUlluNHhKb2ZMdzdzUUtFOFJQQ2MiLCJuIjoiZ3JhZmFuYV9hcGlrZXkiLCJpZCI6MX0='}, 'failed': False, u'connection': u'close', '_ansible_parsed': True, '_ansible_item_label': {u'role': u'Admin', u'name': u'grafana_apikey'}, u'content_type': u'application/json', u'invocation': {u'module_args': {u'directory_mode': None, u'force': False, u'remote_src': None, u'follow_redirects': u'safe', u'body_format': u'json', u'owner': None, u'follow': False, u'client_key': None, u'group': None, u'use_proxy': True, u'unsafe_writes': None, u'setype': None, u'content': None, u'serole': None, u'status_code': [200], u'return_content': False, u'method': u'POST', u'body': u'{"role": "Admin", "name": "grafana_apikey"}', u'url_username': u'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', u'src': None, u'dest': None, u'selevel': None, u'force_basic_auth': True, u'removes': None, u'http_agent': u'ansible-httpget', u'user': u'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', u'regexp': None, u'password': u'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', u'url_password': u'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', u'url': u'http://10.158.1.98:3000/api/auth/keys', u'validate_certs': True, u'seuser': None, u'client_cert': None, u'creates': None, u'headers': {u'Content-Type': u'application/json', u'Authorization': u'Basic YWRtaW46YWRtaW4='}, u'delimiter': None, u'mode': None, u'timeout': 30, u'attributes': None, u'backup': None}}, u'date': u'Sun, 03 Mar 2019 08:13:58 GMT', u'redirected': False, u'cookies_string': u'', '_ansible_ignore_errors': None, u'msg': u'OK (126 bytes)'})

TASK [import grafana data source] ***********************************************************************************************************************************
changed: [10.158.1.98]

TASK [import grafana dashboards - prepare config] *******************************************************************************************************************
changed: [10.158.1.98 -> localhost]

TASK [import grafana dashboards - run import script] ****************************************************************************************************************
changed: [10.158.1.98 -> localhost]

PLAY RECAP **********************************************************************************************************************************************************
10.158.1.122               : ok=16   changed=4    unreachable=0    failed=0   
10.158.1.123               : ok=13   changed=3    unreachable=0    failed=0   
10.158.1.125               : ok=10   changed=2    unreachable=0    failed=0   
10.158.1.126               : ok=10   changed=2    unreachable=0    failed=0   
10.158.1.127               : ok=10   changed=2    unreachable=0    failed=0   
10.158.1.98                : ok=37   changed=13   unreachable=0    failed=0   
TiKV-1-1                   : ok=8    changed=1    unreachable=0    failed=0   
TiKV-1-2                   : ok=8    changed=1    unreachable=0    failed=0   
TiKV-1-3                   : ok=8    changed=1    unreachable=0    failed=0   
TiKV-2-1                   : ok=8    changed=1    unreachable=0    failed=0   
TiKV-2-2                   : ok=8    changed=1    unreachable=0    failed=0   
TiKV-2-3                   : ok=8    changed=1    unreachable=0    failed=0   
localhost                  : ok=1    changed=0    unreachable=0    failed=0   

Congrats! All goes well. :-)
[tidb@tidb-center tidb-ansible]$ 

This step can also fail:

TASK [check_config_dynamic : environment check (deploy dir)] ********************************************************************************************************
ok: [10.158.1.125]
ok: [10.158.1.122]
ok: [10.158.1.98]
ok: [TiKV-2-1]
ok: [TiKV-1-1]
ok: [10.158.1.127]
fatal: [10.158.1.126]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Connection timed out during banner exchange", "unreachable": true}
ok: [TiKV-1-3]
ok: [TiKV-2-3]
ok: [TiKV-2-2]
ok: [TiKV-1-2]
ok: [10.158.1.123]

NO MORE HOSTS LEFT **************************************************************************************************************************************************
        to retry, use: --limit @/home/tidb/tidb-ansible/retry_files/start.retry

PLAY RECAP **********************************************************************************************************************************************************
10.158.1.122               : ok=2    changed=0    unreachable=0    failed=0   
10.158.1.123               : ok=2    changed=0    unreachable=0    failed=0   
10.158.1.125               : ok=2    changed=0    unreachable=0    failed=0   
10.158.1.126               : ok=1    changed=0    unreachable=1    failed=0   
10.158.1.127               : ok=2    changed=0    unreachable=0    failed=0   
10.158.1.98                : ok=2    changed=0    unreachable=0    failed=0   
TiKV-1-1                   : ok=1    changed=0    unreachable=0    failed=0   
TiKV-1-2                   : ok=1    changed=0    unreachable=0    failed=0   
TiKV-1-3                   : ok=1    changed=0    unreachable=0    failed=0   
TiKV-2-1                   : ok=1    changed=0    unreachable=0    failed=0   
TiKV-2-2                   : ok=1    changed=0    unreachable=0    failed=0   
TiKV-2-3                   : ok=1    changed=0    unreachable=0    failed=0   
localhost                  : ok=1    changed=0    unreachable=0    failed=0   


ERROR MESSAGE SUMMARY ***********************************************************************************************************************************************
[10.158.1.126]: Ansible UNREACHABLE! => playbook: start.yml; TASK: check_config_dynamic : environment check (deploy dir); message: {"changed": false, "msg": "Failed to connect to the host via ssh: Connection timed out during banner exchange", "unreachable": true}

Ask for help:
Contact us: support@pingcap.com
It seems that you encounter some problems. You can send an email to the above email address, attached with the tidb-ansible/inventory.ini and tidb-ansible/log/ansible.log files and the error message, or new issue on https://github.com/pingcap/tidb-ansible/issues. We'll try our best to help you deploy a TiDB cluster. Thanks. :-)
[tidb@tidb-center tidb-ansible]$ 

The fix:
Run a plain ssh test against the host first.

So, before this step, I test every IP and every hostname over SSH, and only then run start.yml.
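That pre-flight SSH test can be scripted. A minimal sketch, assuming the host list from this deployment; the `check_ssh` helper is illustrative and not part of tidb-ansible:

```shell
# check_ssh HOST: succeed only if HOST answers SSH non-interactively
# within 2 seconds (hypothetical helper, not part of tidb-ansible).
check_ssh() {
    ssh -o BatchMode=yes -o ConnectTimeout=2 "$1" true >/dev/null 2>&1
}

# Probe every node by hostname and by IP before running start.yml.
for host in tidb-1 tidb-2 tidb-3 tidb-4 tidb-5 tidb-6 \
            10.158.1.98 10.158.1.122 10.158.1.123 \
            10.158.1.125 10.158.1.126 10.158.1.127; do
    if check_ssh "$host"; then
        echo "OK   $host"
    else
        echo "FAIL $host"
    fi
done
```

Any FAIL line must be resolved (DNS, sshd, network) before re-running `ansible-playbook start.yml`, optionally with `--limit @retry_files/start.retry` as the log suggests.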

As shown above, if nothing goes wrong, your TiDB cluster is now up and running.

4 ))))))))))))))) Usage

Once everything is started, first check the listening ports on the TiDB nodes.

tidb-1

[root@tidb-1 ~]# netstat -tupln | grep 3000
tcp6       0      0 :::3000                 :::*                    LISTEN      26381/opt/grafana/b 
[root@tidb-1 ~]# netstat -tupln | grep 4000
tcp6       0      0 :::4000                 :::*                    LISTEN      26079/bin/tidb-serv 
[root@tidb-1 ~]# 

tidb-2

[root@tidb-2 ~]# netstat -tupln | grep 3000
[root@tidb-2 ~]# netstat -tupln | grep 4000
tcp6       0      0 :::4000                 :::*                    LISTEN      26338/bin/tidb-serv 
[root@tidb-2 ~]# 

As shown above:

Port 4000 is the default TiDB server port, the counterpart of MySQL's 3306.

Port 3000 is Grafana's listening port.
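The same checks can be run remotely from the central server. A minimal sketch using bash's built-in `/dev/tcp` pseudo-device; the `port_up` helper is hypothetical:

```shell
# port_up HOST PORT: succeed if a TCP connection to HOST:PORT opens
# (hypothetical helper; relies on bash's /dev/tcp redirection).
port_up() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# Probe the TiDB (4000) and Grafana (3000) ports on both TiDB nodes.
for host in tidb-1 tidb-2; do
    for port in 4000 3000; do
        if port_up "$host" "$port"; then
            echo "UP   $host:$port"
        else
            echo "DOWN $host:$port"
        fi
    done
done
```

On this cluster, tidb-2 is expected to show DOWN for 3000, since Grafana was deployed only on tidb-1.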

1 - Accessing the database
From the central server, connect with the MySQL command-line client:

[root@tidb-center ~]# mysql -u root -h tidb-1 -P 4000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 18
Server version: 5.7.10-TiDB-v2.1.5-1-g0bd6b1b MySQL Community Server (Apache License 2.0)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
4 rows in set (0.00 sec)

MySQL [(none)]> 

On MS Windows, try connecting with a database client application as well:

2 - Accessing the web UI:
http://10.158.1.98:3000/login
Default credentials: admin / admin

As shown above, Grafana gives you a clear, at-a-glance view of the TiDB cluster's health.

5 ))))))))))))))) Summary

Finally, a quick summary.

TiDB: ports

Component          Variable                  Port    Description
TiDB               tidb_port                 4000    Port for application and DBA tool access
TiDB               tidb_status_port          10080   Port for reporting TiDB status
TiKV               tikv_port                 20160   TiKV communication port
PD                 pd_client_port            2379    Port for communication between TiDB and PD
PD                 pd_peer_port              2380    Port for communication between PD cluster nodes
Pump               pump_port                 8250    Pump communication port
Prometheus         prometheus_port           9090    Prometheus service port
Pushgateway        pushgateway_port          9091    Monitoring aggregation and reporting port for TiDB, TiKV and PD
Node_exporter      node_exporter_port        9100    Port for reporting system metrics of each node in the TiDB cluster
Blackbox_exporter  blackbox_exporter_port    9115    Blackbox_exporter port, used to monitor ports in the TiDB cluster
Grafana            grafana_port              3000    Web monitoring service and client (browser) access port
Grafana            grafana_collector_port    8686    grafana_collector port, used to export dashboards as PDF
Kafka_exporter     kafka_exporter_port       9308    Kafka_exporter port, used to monitor the binlog Kafka cluster

TiDB: directories

Component      Variable               Default                          Description
Global         deploy_dir             /home/tidb/deploy                Deployment directory
TiDB           tidb_log_dir           {{ deploy_dir }}/log             Log directory
TiKV           tikv_log_dir           {{ deploy_dir }}/log             Log directory
TiKV           tikv_data_dir          {{ deploy_dir }}/data            Data directory
TiKV           wal_dir                ""                               RocksDB write-ahead log directory; when empty, same as the TiKV data directory
TiKV           raftdb_path            ""                               raftdb directory; when empty, defaults to tikv_data_dir/raft
PD             pd_log_dir             {{ deploy_dir }}/log             Log directory
PD             pd_data_dir            {{ deploy_dir }}/data.pd         Data directory
pump           pump_log_dir           {{ deploy_dir }}/log             Log directory
pump           pump_data_dir          {{ deploy_dir }}/data.pump       Data directory
prometheus     prometheus_log_dir     {{ deploy_dir }}/log             Log directory
prometheus     prometheus_data_dir    {{ deploy_dir }}/data.metrics    Data directory
pushgateway    pushgateway_log_dir    {{ deploy_dir }}/log             Log directory
node_exporter  node_exporter_log_dir  {{ deploy_dir }}/log             Log directory
grafana        grafana_log_dir        {{ deploy_dir }}/log             Log directory
grafana        grafana_data_dir       {{ deploy_dir }}/data.grafana    Data directory
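All of these variables are set in tidb-ansible/inventory.ini. A minimal sketch of where they live; the values shown are the defaults used in this deployment, and the commented per-host override follows the pattern from the official multi-instance docs (host alias, IP, and port are examples only):

```ini
## tidb-ansible/inventory.ini (excerpt, illustrative)
[all:vars]
deploy_dir = /home/tidb/deploy

## Per-host overrides are also possible, e.g. a dedicated deploy
## directory and port for one TiKV instance:
# [tikv_servers]
# TiKV-1-1 ansible_host=10.158.1.125 deploy_dir=/data1/deploy tikv_port=20171
```

After editing inventory.ini, re-run the deploy and rolling-update playbooks for the change to take effect.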

6 ))))))))))))))) The End

That's all.

——————————————————————————————————
Done.
