ZFS (Zettabyte File System)

Overview

ZFS, the Zettabyte File System, also known as the Dynamic File System, was the first 128-bit file system.
It was originally developed by Sun Microsystems for Solaris 10 and released as part of the OpenSolaris open-source project.
ZFS shipped in November 2005, and Sun billed it as the ultimate file system.
After roughly ten years of active development, the project moved to a fully open development model and was renamed OpenZFS.

OpenZFS on GitHub: https://github.com/openzfs/zfs/

Official website:
https://zfsonlinux.org/

Latest release at the time of writing: 2.1.1, released in September 2021 (the version installed below).

The ZFS architecture, at a high level:

In the past, file systems were built directly on top of physical devices.
To manage multiple physical devices and provide data redundancy, volume managers were introduced to present the image of a single device. But this design added complexity and ultimately held file systems back, because a file system could not span the physical location of its data.

The classic example of this volume-manager layer is LVM.

ZFS drops the volume-manager layer entirely. Instead of creating virtual volumes, it aggregates all devices into a storage pool for unified management.
The pool describes the physical characteristics of the storage (device layout, data redundancy, and so on) and acts as a dedicated store of space from which file systems can be created.

File systems are therefore no longer tied to individual physical devices; they share the space of all the devices in the pool. You no longer need to pre-plan a file system's size, because it grows automatically within the pool's space.
When new storage media are added, every file system in the pool can use the new space immediately, with no extra steps required.
In many ways, a storage pool behaves like a virtual memory system.

ZFS uses a copy-on-write (COW) transactional model, which makes snapshots and transactional semantics simple and natural to implement.
The upside of snapshots is flexibility; the downside is that COW can cause severe fragmentation: if a large file written sequentially is later modified at random offsets, its blocks are no longer physically contiguous on disk, and subsequent sequential reads of that file will be slower.

Storage pools

Unlike traditional file systems, which must reside on a single device or need a volume manager to use more than one device,
ZFS is built on virtual storage pools called zpools. (The storage-pool concept was first implemented in AdvFS and was later added to Btrfs.)

A storage pool is made up of virtual devices (vdevs).
A vdev can be a raw disk, a RAID 1 mirror, or a multi-disk group using a non-standard RAID level.
The file systems in a zpool can then use the combined capacity of all these vdevs.

Disk quotas and reservations can be used to limit (or guarantee) the space a single file system occupies within the pool, as sketched below.
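
Both knobs are plain dataset properties. A minimal sketch, assuming a pool named tank with a file system tank/projects (hypothetical names):

--- Limit tank/projects to 10G, and guarantee it 2G of pool space
zfs set quota=10G tank/projects
zfs set reservation=2G tank/projects
zfs get quota,reservation tank/projects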

Capacity

ZFS is a 128-bit file system, which means it can address 18.4 quintillion (18.4 × 10^18, i.e. 2^64) times more data than today's 64-bit file systems.

Jeff Bonwick, the lead developer of ZFS, famously said:

Populating 128-bit file systems would exceed the quantum limits of earth-based storage.
You couldn't fill a 128-bit storage pool without boiling the oceans.

ZFS theoretical limits:

  • Maximum size of a single file: 16 EB (1 EB = 1024 PB)
  • Maximum total storage: 256 quadrillion ZB (256 × 10^15 ZB; 1 ZB = 1024 EB)

How to install ZFS

Recommended platforms: FreeBSD, or an operating system built on the illumos kernel
(illumos is an open-source derivative of the OpenSolaris kernel).

If you want to use ZFS on Linux, you can realistically only use it for data storage,
because at the time of writing no Linux distribution installs ZFS on the root file system out of the box.
So, to try ZFS on Linux, start with the ZFS on Linux project.

ZFS ships with two main tools:

  • ZPOOL
    • creates and maintains ZFS pools out of disk devices
  • ZFS
    • creates and maintains the datasets (file systems) inside those pools

Oracle's official ZFS documentation: https://docs.oracle.com/cd/E24847_01/html/819-7065/gbscy.html#scrolltoc


Install ZFS on CentOS 7.x

ZFS: Installation

First, download the source tarball from the official releases:
https://github.com/openzfs/zfs/releases/download/zfs-2.1.1/zfs-2.1.1.tar.gz

Unpack it: tar -xzf zfs-2.1.1.tar.gz

Before building from source, install the dependent RPM packages:

yum install -y libuuid-devel
yum install -y libblkid-devel
yum install -y openssl
yum install -y openssl-devel
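
Building the kernel modules also needs a compiler toolchain and headers matching the running kernel. The list below is an assumption for a stock CentOS 7 system; ./configure will report anything that is still missing:

yum install -y gcc make autoconf automake libtool
yum install -y kernel-devel-$(uname -r)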

Create the install prefix: mkdir /opt/zfs

Build and install from source:

./configure --prefix=/opt/zfs
make
make install

[root@oracle1 zfs-2.1.1]# ls -ltr /opt/zfs/
total 0
[root@oracle1 zfs-2.1.1]# 
[root@oracle1 zfs-2.1.1]# make install

(extensive output truncated, ...)

make[4]: Leaving directory `/software/zfs-2.1.1/contrib/initramfs/hooks'
make[3]: Leaving directory `/software/zfs-2.1.1/contrib/initramfs/hooks'
Making install in scripts
make[3]: Entering directory `/software/zfs-2.1.1/contrib/initramfs/scripts'
Making install in local-top
make[4]: Entering directory `/software/zfs-2.1.1/contrib/initramfs/scripts/local-top'
make[5]: Entering directory `/software/zfs-2.1.1/contrib/initramfs/scripts/local-top'
make[5]: Nothing to be done for `install-exec-am'.
 /usr/bin/mkdir -p '/usr/share/initramfs-tools/scripts/local-top'
 /usr/bin/install -c zfs '/usr/share/initramfs-tools/scripts/local-top'
make[5]: Leaving directory `/software/zfs-2.1.1/contrib/initramfs/scripts/local-top'
make[4]: Leaving directory `/software/zfs-2.1.1/contrib/initramfs/scripts/local-top'
make[4]: Entering directory `/software/zfs-2.1.1/contrib/initramfs/scripts'
make[5]: Entering directory `/software/zfs-2.1.1/contrib/initramfs/scripts'
make[5]: Nothing to be done for `install-exec-am'.
 /usr/bin/mkdir -p '/usr/share/initramfs-tools/scripts'
 /usr/bin/install -c zfs '/usr/share/initramfs-tools/scripts'
make[5]: Leaving directory `/software/zfs-2.1.1/contrib/initramfs/scripts'
make[4]: Leaving directory `/software/zfs-2.1.1/contrib/initramfs/scripts'
make[3]: Leaving directory `/software/zfs-2.1.1/contrib/initramfs/scripts'
make[3]: Entering directory `/software/zfs-2.1.1/contrib/initramfs'
make[4]: Entering directory `/software/zfs-2.1.1/contrib/initramfs'
make[4]: Nothing to be done for `install-exec-am'.
 /usr/bin/mkdir -p '/usr/share/initramfs-tools'
 /usr/bin/install -c zfsunlock '/usr/share/initramfs-tools'
make[4]: Leaving directory `/software/zfs-2.1.1/contrib/initramfs'
make[3]: Leaving directory `/software/zfs-2.1.1/contrib/initramfs'
make[2]: Leaving directory `/software/zfs-2.1.1/contrib/initramfs'
make[2]: Entering directory `/software/zfs-2.1.1/contrib'
make[3]: Entering directory `/software/zfs-2.1.1/contrib'
make[3]: Nothing to be done for `install-exec-am'.
make[3]: Nothing to be done for `install-data-am'.
make[3]: Leaving directory `/software/zfs-2.1.1/contrib'
make[2]: Leaving directory `/software/zfs-2.1.1/contrib'
make[1]: Leaving directory `/software/zfs-2.1.1/contrib'
Making install in udev
make[1]: Entering directory `/software/zfs-2.1.1/udev'
Making install in rules.d
make[2]: Entering directory `/software/zfs-2.1.1/udev/rules.d'
make[3]: Entering directory `/software/zfs-2.1.1/udev/rules.d'
make[3]: Nothing to be done for `install-exec-am'.
 /usr/bin/mkdir -p '/lib/udev/rules.d'
 /usr/bin/install -c -m 644 69-vdev.rules 60-zvol.rules 90-zfs.rules '/lib/udev/rules.d'
make[3]: Leaving directory `/software/zfs-2.1.1/udev/rules.d'
make[2]: Leaving directory `/software/zfs-2.1.1/udev/rules.d'
make[2]: Entering directory `/software/zfs-2.1.1/udev'
make[3]: Entering directory `/software/zfs-2.1.1/udev'
make[3]: Nothing to be done for `install-exec-am'.
make[3]: Nothing to be done for `install-data-am'.
make[3]: Leaving directory `/software/zfs-2.1.1/udev'
make[2]: Leaving directory `/software/zfs-2.1.1/udev'
make[1]: Leaving directory `/software/zfs-2.1.1/udev'
Making install in module
make[1]: Entering directory `/software/zfs-2.1.1/module'
make -C /usr/src/kernels/3.10.0-957.el7.x86_64 M=`pwd` modules_install \
	INSTALL_MOD_PATH= \
	INSTALL_MOD_DIR=extra \
	KERNELRELEASE=3.10.0-957.el7.x86_64
make[2]: Entering directory `/usr/src/kernels/3.10.0-957.el7.x86_64'
  INSTALL /software/zfs-2.1.1/module/avl/zavl.ko
Can't read private key
  INSTALL /software/zfs-2.1.1/module/icp/icp.ko
Can't read private key
  INSTALL /software/zfs-2.1.1/module/lua/zlua.ko
Can't read private key
  INSTALL /software/zfs-2.1.1/module/nvpair/znvpair.ko
Can't read private key
  INSTALL /software/zfs-2.1.1/module/spl/spl.ko
Can't read private key
  INSTALL /software/zfs-2.1.1/module/unicode/zunicode.ko
Can't read private key
  INSTALL /software/zfs-2.1.1/module/zcommon/zcommon.ko
Can't read private key
  INSTALL /software/zfs-2.1.1/module/zfs/zfs.ko
Can't read private key
  INSTALL /software/zfs-2.1.1/module/zstd/zzstd.ko
Can't read private key
  DEPMOD  3.10.0-957.el7.x86_64
make[2]: Leaving directory `/usr/src/kernels/3.10.0-957.el7.x86_64'
kmoddir=/lib/modules/3.10.0-957.el7.x86_64; \
if [ -n "" ]; then \
	find $kmoddir -name 'modules.*' | xargs rm -f; \
fi
sysmap=/boot/System.map-3.10.0-957.el7.x86_64; \
if [ -f $sysmap ]; then \
	depmod -ae -F $sysmap 3.10.0-957.el7.x86_64; \
fi
make[1]: Leaving directory `/software/zfs-2.1.1/module'
make[1]: Entering directory `/software/zfs-2.1.1'
[ -x ./scripts/zfs-tests.sh ] && \
    ./scripts/zfs-tests.sh -c
make[2]: Entering directory `/software/zfs-2.1.1'
make[2]: Nothing to be done for `install-exec-am'.
 /usr/bin/mkdir -p '/opt/zfs/src/zfs-2.1.1'
 /usr/bin/install -c -m 644 zfs.release.in zfs_config.h.in '/opt/zfs/src/zfs-2.1.1'
 /usr/bin/mkdir -p '/opt/zfs/src/zfs-2.1.1/3.10.0-957.el7.x86_64'
 /usr/bin/install -c -m 644 zfs.release zfs_config.h module/Module.symvers '/opt/zfs/src/zfs-2.1.1/3.10.0-957.el7.x86_64'
make  install-data-hook
make[3]: Entering directory `/software/zfs-2.1.1'
rm -rf /opt/zfs/src/spl-2.1.1 && \
mkdir /opt/zfs/src/spl-2.1.1 && \
cd /opt/zfs/src/spl-2.1.1 && \
ln -s ../zfs-2.1.1/include/spl include && \
ln -s ../zfs-2.1.1/3.10.0-957.el7.x86_64 3.10.0-957.el7.x86_64 && \
ln -s ../zfs-2.1.1/zfs_config.h.in spl_config.h.in && \
ln -s ../zfs-2.1.1/zfs.release.in spl.release.in && \
cd /opt/zfs/src/zfs-2.1.1/3.10.0-957.el7.x86_64 && \
ln -fs zfs_config.h spl_config.h && \
ln -fs zfs.release spl.release
make[3]: Leaving directory `/software/zfs-2.1.1'
make[2]: Leaving directory `/software/zfs-2.1.1'
make[1]: Leaving directory `/software/zfs-2.1.1'
[root@oracle1 zfs-2.1.1]# 
[root@oracle1 zfs-2.1.1]# ls -ltr /opt/zfs/
total 4
drwxr-xr-x 4 root root   28 Sep 17 23:42 share
drwxr-xr-x 4 root root   34 Sep 17 23:42 include
drwxr-xr-x 3 root root 4096 Sep 17 23:42 lib
drwxr-xr-x 3 root root   17 Sep 17 23:42 libexec
drwxr-xr-x 2 root root  184 Sep 17 23:42 sbin
drwxr-xr-x 2 root root   91 Sep 17 23:42 bin
drwxr-xr-x 5 root root   48 Sep 17 23:42 etc
drwxr-xr-x 4 root root   40 Sep 17 23:43 src
[root@oracle1 zfs-2.1.1]#
[root@oracle1 zfs-2.1.1]# ls -ltr /opt/zfs/bin/
total 216
-rwxr-xr-x 1 root root 126352 Sep 17 23:42 raidz_test
-rwxr-xr-x 1 root root  16851 Sep 17 23:42 arcstat
-rwxr-xr-x 1 root root  42356 Sep 17 23:42 arc_summary
-rwxr-xr-x 1 root root  22448 Sep 17 23:42 dbufstat
-rwxr-xr-x 1 root root   2551 Sep 17 23:42 zvol_wait
[root@oracle1 zfs-2.1.1]# 

Configure the environment variables:

[root@oracle1 zfs-2.1.1]# vi ~/.bash_profile 
[root@oracle1 zfs-2.1.1]# 
[root@oracle1 zfs-2.1.1]# cat ~/.bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
	. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin:/opt/zfs/bin:/opt/zfs/sbin

export PATH
[root@oracle1 zfs-2.1.1]# 
[root@oracle1 zfs-2.1.1]# whereis zfs
zfs: /opt/zfs/sbin/zfs
[root@oracle1 zfs-2.1.1]#
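
Because everything lives under the non-standard prefix /opt/zfs, the shared libraries may also need to be registered with the dynamic linker. A sketch (the file name zfs.conf is arbitrary):

echo /opt/zfs/lib > /etc/ld.so.conf.d/zfs.conf
ldconfig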

Loading the ZFS kernel module on Linux

[root@oracle1 zfs-2.1.1]# /sbin/modprobe zfs
[root@oracle1 zfs-2.1.1]# 

--- Add it to the boot script (note the file's own warning: run chmod +x /etc/rc.d/rc.local or it will not execute at boot)
[root@oracle1 ~]# cat /etc/rc.local 
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local

# Disable Transparent Hugepage
echo never > /sys/kernel/mm/transparent_hugepage/enabled

# ZFS
/sbin/modprobe zfs
[root@oracle1 ~]# 
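
On a systemd distribution such as CentOS 7, a cleaner alternative to rc.local is systemd's native module-loading mechanism. A sketch (the file name is arbitrary):

--- Loaded by systemd-modules-load at every boot
echo zfs > /etc/modules-load.d/zfs.conf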

ZFS: Storage pools: Creating a pool

Attach two new disks to the Linux host:

--- Before adding the disks
[root@oracle1 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk 
├─sda1   8:1    0  300M  0 part /boot
├─sda2   8:2    0    2G  0 part [SWAP]
└─sda3   8:3    0 17.7G  0 part /
sr0     11:0    1 1024M  0 rom  
[root@oracle1 ~]# 

--- After adding the disks
[root@oracle1 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk 
├─sda1   8:1    0  300M  0 part /boot
├─sda2   8:2    0    2G  0 part [SWAP]
└─sda3   8:3    0 17.7G  0 part /
sdb      8:16   0   10G  0 disk 
sdc      8:32   0   10G  0 disk 
sr0     11:0    1 1024M  0 rom  
[root@oracle1 ~]# 

As shown, the new disk devices are sdb and sdc; they are referenced by these short names below, as in the original session, though persistent identifiers are safer (see the sketch that follows).
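
sdX names can change across reboots, so production pools are usually built from persistent device identifiers instead. A sketch (the by-id name shown is hypothetical):

ls -l /dev/disk/by-id/
zpool create datapool_1 /dev/disk/by-id/ata-VBOX_HARDDISK_VB0abc1234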

Create the pool:

[root@oracle1 ~]# zpool list
no pools available
[root@oracle1 ~]# 
[root@oracle1 ~]# zpool create datapool_1 sdb
[root@oracle1 ~]# 
[root@oracle1 ~]# zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
datapool_1  9.50G    99K  9.50G        -         -     0%     0%  1.00x    ONLINE  -
[root@oracle1 ~]# 

--- Check the pool status (this capture already lists both sdb and sdc, so it appears to have been taken after the expansion step below)
[root@oracle1 ~]# zpool status
  pool: datapool_1
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	datapool_1  ONLINE       0     0     0
	  sdb       ONLINE       0     0     0
	  sdc       ONLINE       0     0     0

errors: No known data errors
[root@oracle1 ~]#

--- Check the mount points
[root@oracle1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        18G   17G  1.3G  93% /
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   12M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1       297M  129M  169M  44% /boot
tmpfs           378M     0  378M   0% /run/user/0
datapool_1      9.3G  128K  9.3G   1% /datapool_1
[root@oracle1 ~]#

As you can see, the zpool is mounted automatically as soon as it is created;

However, although the command above auto-mounts the pool, nothing was written to make this happen at boot:

[root@oracle1 ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Sun Sep  5 17:33:32 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=6321ab43-94c1-4cf3-ae86-69b5c3f44b16 /                       xfs     defaults        0 0
UUID=9bdb71de-8351-4ead-bae1-dc335dd3e963 /boot                   xfs     defaults        0 0
UUID=7c88e8ce-020e-4235-bf4e-866289190cc5 swap                    swap    defaults        0 0
[root@oracle1 ~]# 

So after a reboot, the ZFS mount points will be gone.
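
ZFS does not use /etc/fstab; packaged installs ship zfs-import and zfs-mount systemd services for this. For this bare source build, one workable sketch (assuming the pool is left imported at shutdown) is to extend the rc.local entry from earlier:

--- /etc/rc.local, after the modprobe line
/sbin/modprobe zfs
zpool import -a
zfs mount -a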

ZFS: Storage pools: Expanding

First, check the current capacity:

[root@oracle1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        18G   17G  1.3G  93% /
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   12M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1       297M  129M  169M  44% /boot
tmpfs           378M     0  378M   0% /run/user/0
datapool_1      9.3G  128K  9.3G   1% /datapool_1
[root@oracle1 ~]# 
[root@oracle1 ~]# df -h | grep 'datapool_1'
datapool_1      9.3G  128K  9.3G   1% /datapool_1
[root@oracle1 ~]# 
[root@oracle1 ~]# mount | grep 'datapool_1'
datapool_1 on /datapool_1 type zfs (rw,xattr,noacl)
[root@oracle1 ~]# 

Note that the pool currently has only 9.3G.

Expand it:

[root@oracle1 ~]# zpool add datapool_1 sdb
/dev/sdb is in use and contains a unknown filesystem.
[root@oracle1 ~]# 
[root@oracle1 ~]# df -h | grep 'datapool_1'
datapool_1      9.3G  128K  9.3G   1% /datapool_1
[root@oracle1 ~]# 
[root@oracle1 ~]# zpool add datapool_1 sdc
[root@oracle1 ~]# 
[root@oracle1 ~]# df -h | grep 'datapool_1'
datapool_1       19G  128K   19G   1% /datapool_1
[root@oracle1 ~]# 

As shown above, trying to add a disk that is already in use by a zpool produces an error;
once the add succeeds, the extra space is visible immediately. Note that zpool add attaches sdc as a second top-level vdev with no redundancy; see the sketch below for the mirrored alternative.
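
If redundancy matters more than capacity, the second disk can instead be attached as a mirror of the first. A sketch (only valid while the pool still consists of the single disk sdb):

zpool attach datapool_1 sdb sdc
zpool status datapool_1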

Using ZFS: ZFS + NFS

ZFS is commonly paired with NFS, using NFS to share ZFS directories over the network.

Installing NFS:

yum install -y nfs-utils rpcbind
service rpcbind start
service nfs start
service nfslock start
chkconfig rpcbind on
chkconfig nfs on
chkconfig nfslock on
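
On CentOS 7 the service/chkconfig commands above work through compatibility shims; the native systemd equivalent would be roughly:

systemctl enable --now rpcbind nfs-server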

ZFS: Enabling NFS sharing

zfs set sharenfs=on <dataset_name>

--- For example
[root@oracle1 ~]# zfs set sharenfs=on datapool_1
[root@oracle1 ~]# 

Configure the export in /etc/exports:

[root@oracle1 ~]# cat /etc/exports
/datapool_1	*(rw,no_root_squash)
[root@oracle1 ~]#

Activate the shares:

[root@oracle1 ~]# zfs share -a
[root@oracle1 ~]# 

List the shares:

[root@oracle1 ~]# showmount -e localhost
Export list for localhost:
/datapool_1 *
[root@oracle1 ~]# 

Check the RPC services and listening ports:

[root@oracle1 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  22330  status
    100024    1   tcp  32682  status
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl
    100021    1   udp  49735  nlockmgr
    100021    3   udp  49735  nlockmgr
    100021    4   udp  49735  nlockmgr
    100021    1   tcp  19237  nlockmgr
    100021    3   tcp  19237  nlockmgr
    100021    4   tcp  19237  nlockmgr
[root@oracle1 ~]# 
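
From an NFS client, the share can then be mounted like any other export. A sketch (192.168.1.10 is a hypothetical server address):

mkdir -p /mnt/datapool_1
mount -t nfs 192.168.1.10:/datapool_1 /mnt/datapool_1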

ZFS: Snapshots

A snapshot is a read-only copy of a ZFS file system at a point in time. Snapshots initially consume no extra space in the pool; space is used only as the live data diverges from the snapshot.

Create a snapshot:

zfs snapshot <pool_name>@<snapshot_name>

--- For example
[root@oracle1 ~]# zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
datapool_1    19G   166K  19.0G        -         -     0%     0%  1.00x    ONLINE  -
[root@oracle1 ~]# 
[root@oracle1 ~]# zfs snapshot datapool_2@snapshot_01
cannot open 'datapool_2': dataset does not exist
usage:
	snapshot [-r] [-o property=value] ... <filesystem|volume>@<snap> ...

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow
[root@oracle1 ~]# 
[root@oracle1 ~]# zfs snapshot datapool_1@snapshot_01
[root@oracle1 ~]# 

As shown, if the pool (dataset) you name does not exist, the command fails with "dataset does not exist".
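
A snapshot can later be used to roll the file system back, discarding every change made since it was taken. A sketch using the snapshot created above:

zfs rollback datapool_1@snapshot_01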

ZFS: Capacity limits

Setting a QUOTA limits how much space a dataset may consume:

zfs set quota=20T <pool_name>/<filesystem_name>
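
For example, a sketch with a hypothetical file system name:

zfs set quota=20T datapool_1/myfs
zfs get quota datapool_1/myfs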

ZFS: Storage pools: Migration

Export:

[root@oracle1 ~]# mkdir zfs_export_data
[root@oracle1 ~]# cd zfs_export_data/
[root@oracle1 zfs_export_data]# ls -ltr
total 0
[root@oracle1 zfs_export_data]# 
[root@oracle1 zfs_export_data]# zpool export datapool_1
[root@oracle1 zfs_export_data]# 
[root@oracle1 zfs_export_data]# ls -ltr
total 0
[root@oracle1 zfs_export_data]# 
[root@oracle1 zfs_export_data]# zpool status
no pools available
[root@oracle1 zfs_export_data]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        18G   17G  1.3G  93% /
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   12M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1       297M  129M  169M  44% /boot
tmpfs           378M     0  378M   0% /run/user/0
[root@oracle1 zfs_export_data]# 

As you can see, exporting does not produce an export file;
after the export, the pool no longer appears in zpool output and its mount point has been unmounted. The pool's data stays on the member disks, ready to be imported elsewhere.

Import:

[root@oracle1 zfs_export_data]# zpool import
   pool: datapool_1
     id: 13918607751309232348
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	datapool_1  ONLINE
	  sdb       ONLINE
	  sdc       ONLINE
[root@oracle1 zfs_export_data]# 
[root@oracle1 zfs_export_data]# zpool status
no pools available
[root@oracle1 zfs_export_data]# 
[root@oracle1 zfs_export_data]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        18G   17G  1.3G  93% /
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   12M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1       297M  129M  169M  44% /boot
tmpfs           378M     0  378M   0% /run/user/0
[root@oracle1 zfs_export_data]#
[root@oracle1 zfs_export_data]# zpool import datapool_1
[root@oracle1 zfs_export_data]# 
[root@oracle1 zfs_export_data]# zpool status
  pool: datapool_1
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	datapool_1  ONLINE       0     0     0
	  sdb       ONLINE       0     0     0
	  sdc       ONLINE       0     0     0

errors: No known data errors
[root@oracle1 zfs_export_data]# 
[root@oracle1 zfs_export_data]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        18G   17G  1.3G  93% /
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   12M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1       297M  129M  169M  44% /boot
tmpfs           378M     0  378M   0% /run/user/0
datapool_1       19G  128K   19G   1% /datapool_1
[root@oracle1 zfs_export_data]# 

As you can see, importing also re-mounts the file systems automatically;

To unmount all ZFS file systems:

[root@oracle1 zfs_export_data]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        18G   17G  1.3G  93% /
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   12M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1       297M  129M  169M  44% /boot
tmpfs           378M     0  378M   0% /run/user/0
datapool_1       19G  128K   19G   1% /datapool_1
[root@oracle1 zfs_export_data]# 
[root@oracle1 zfs_export_data]# zfs umount -a
[root@oracle1 zfs_export_data]# 
[root@oracle1 zfs_export_data]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        18G   17G  1.3G  93% /
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   12M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1       297M  129M  169M  44% /boot
tmpfs           378M     0  378M   0% /run/user/0
[root@oracle1 zfs_export_data]# 

Re-mount the ZFS file systems:

[root@oracle1 zfs_export_data]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        18G   17G  1.3G  93% /
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   12M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1       297M  129M  169M  44% /boot
tmpfs           378M     0  378M   0% /run/user/0
[root@oracle1 zfs_export_data]# 
[root@oracle1 zfs_export_data]# zfs mount -a
[root@oracle1 zfs_export_data]# 
[root@oracle1 zfs_export_data]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        18G   17G  1.3G  93% /
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   12M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1       297M  129M  169M  44% /boot
tmpfs           378M     0  378M   0% /run/user/0
datapool_1       19G  128K   19G   1% /datapool_1
[root@oracle1 zfs_export_data]# 

ZFS: Storage pools: Destroying

Syntax:

zpool destroy <zpool_name>
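
Destruction takes effect immediately, but a recently destroyed pool whose disks have not been reused can sometimes still be recovered. A sketch:

zpool import -D
zpool import -D datapool_1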

ZFS: File systems

Create a file system:

[root@oracle1 ~]# zfs list
NAME         USED  AVAIL     REFER  MOUNTPOINT
datapool_1   212K  18.4G       24K  /datapool_1
[root@oracle1 ~]# 
[root@oracle1 ~]# zfs create datapool_1/alice_wonderland
[root@oracle1 ~]# 
[root@oracle1 ~]# zfs list
NAME                          USED  AVAIL     REFER  MOUNTPOINT
datapool_1                    268K  18.4G       24K  /datapool_1
datapool_1/alice_wonderland    24K  18.4G       24K  /datapool_1/alice_wonderland
[root@oracle1 ~]# 
[root@oracle1 ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/sda3                     18G   17G  1.3G  93% /
devtmpfs                     1.9G     0  1.9G   0% /dev
tmpfs                        1.9G     0  1.9G   0% /dev/shm
tmpfs                        1.9G   12M  1.9G   1% /run
tmpfs                        1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1                    297M  129M  169M  44% /boot
tmpfs                        378M     0  378M   0% /run/user/0
datapool_1                    19G  128K   19G   1% /datapool_1
datapool_1/alice_wonderland   19G  128K   19G   1% /datapool_1/alice_wonderland
[root@oracle1 ~]# 
[root@oracle1 ~]# mount | grep datapool
datapool_1 on /datapool_1 type zfs (rw,xattr,noacl)
datapool_1/alice_wonderland on /datapool_1/alice_wonderland type zfs (rw,xattr,noacl)
[root@oracle1 ~]# 

Destroy a file system:

zfs destroy <pool_name>/<filesystem_name>

That concludes this brief introduction to ZFS; see the official documentation for full details.


ZFS / Common commands

Create a zpool
zpool create <pool_name> /dev/sdb /dev/sdc /dev/sdd

Show which clone originates from which snapshot:
[root@oracle1 ~]# zfs get origin
NAME                         PROPERTY  VALUE   SOURCE
datapool_1                   origin    -       -
datapool_1@snapshot_01       origin    -       -
datapool_1/alice_wonderland  origin    -       -
[root@oracle1 ~]# 

List snapshots
[root@oracle1 ~]# zfs list -t snapshot
NAME                     USED  AVAIL     REFER  MOUNTPOINT
datapool_1@snapshot_01    22K      -       24K  -
[root@oracle1 ~]# 

Create a snapshot
zfs snapshot <pool_name>/<fs_name>@<snapshot_name>

Clone a snapshot into a new file system
zfs clone <pool_name>/<fs_name>@<snapshot_name> <pool_name>/<fs_name>

Mount a file system
zfs mount <pool_name>/<fs_name>

Destroy a snapshot together with its dependent clones
zfs destroy -R -f <pool_name>/<fs_name>@<snapshot_name>

Change a file system's quota
zfs set quota=50G <pool_name>/<fs_name>

Send a snapshot to another host
zfs send <pool_name>/<fs_name>@<snapshot_name> | ssh user@ip "zfs receive <pool_name>/<fs_name>"
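
After an initial full send, later transfers can be incremental. A sketch with hypothetical snapshot names snap1 and snap2:

zfs send -i <pool_name>/<fs_name>@snap1 <pool_name>/<fs_name>@snap2 | ssh user@ip "zfs receive <pool_name>/<fs_name>"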

Add a hot spare
zpool status -v
zpool add <pool_name> spare <device_name>

Create a file system and set its mount point
zfs create <pool_name>/<fs_name>
zfs set mountpoint=/<mount_point> <pool_name>/<fs_name>

Rename a zpool
zpool export <pool_name>
zpool import <pool_name> <pool_name_new>

Enable compression
zfs set compression=lz4 <pool_name>/<fs_name>

Enable deduplication (note: dedup consumes a lot of RAM; enable with care)
zfs set dedup=on <pool_name>/<fs_name>

If ZFS hangs:
1. Reboot the server
2. Once the server is back up, export and re-import the ZFS pool

