______________________________________
http://www.chinaunix.net/old_jh/6/1284850.html
Author: sunshiene, posted 2009-02-13 14:05:13
Our company's new project is moving onto ZFS, so I spent the past few days learning it and used spare moments to organize these notes. Treat this as a ZFS primer; the operations are all simple, and you will know them after one pass of practice yourself. I worked through the Solaris 10 ZFS Administration Guide on docs.sun.com. Please do not take my write-up as a textbook; it is for reference only.
http://docs.sun.com/app/docs/doc/819-7065?l=zh&a=load
After two days I find ZFS very convenient, and the commands are easy to remember. There is no need to repeat ZFS's advantages; anyone who uses it can tell that ZFS and UFS are entirely different concepts. The only open questions are stability and reliability, which will have to be proven in practice.
The Solaris 10 10/08 release will be available for download at the end of October, and with the economic crisis I believe more users will adopt the free ZFS. I hope ZFS grows stronger through the challenge.
Note: I am not good at theoretical writing, so please look elsewhere for the theory :)
Finished! # date
Wednesday, October 15, 2008 10:33:37 AM CST
Doc download added at the bottom of post #1.
Chapter 1: Managing zpools
1.1 Creating a zpool
1.1.1 Creating a non-redundant zpool
1.1.2 Creating a mirrored pool
1.1.3 Creating a raidz zpool
1.2 Destroying a zpool
1.3 Managing zpools
1.3.1 Attaching and detaching mirrors
1.3.2 Adding space to a zpool
1.3.3 Adding and removing spare disks
1.4 zpool maintenance / replacing failed disks
1.5 Migrating a zpool
1.6 Recovering a destroyed zpool
1.7 zpool I/O statistics
1.8 Migrating ZFS storage pools
1.9 Upgrading a zpool's version
Chapter 2: Creating and configuring ZFS file systems
2.1 Creating and destroying ZFS file systems
2.1.1 Creating a ZFS file system
2.1.2 Renaming a ZFS file system
2.1.3 Destroying a ZFS file system
2.2 Introduction to ZFS properties
2.3 Querying ZFS file system information
2.4 Managing ZFS properties
2.4.1 Setting (set)
2.4.2 Inheriting (inherit)
2.4.3 Querying (get)
2.4.4 Mounting and unmounting ZFS file systems
2.4.5 Sharing and unsharing ZFS file systems
2.4.6 ZFS quotas and reservations
Chapter 3: ZFS snapshots and clones
3.1 Snapshots (snapshot)
3.1.1 Creating and destroying snapshots
3.1.2 Listing and renaming snapshots
3.1.3 Rolling back with a snapshot
3.2 Clones
3.2.1 Creating a clone
3.2.2 Destroying a clone
3.2.3 Replacing a file system with a clone
3.3 Saving and restoring snapshots
3.3.1 Saving a snapshot
3.3.2 Restoring a file system from a snapshot file
Appendix: ZFS volumes
ZFS is driven mainly by two commands and their subcommands:
zfs
zpool
The zpool command menu:
# zpool
missing command
usage: zpool command args ...
where 'command' is one of the following:
create [-fn] [-R root] [-m mountpoint] <pool> <vdev> ...
destroy [-f] <pool>
add [-fn] <pool> <vdev> ...
remove <pool> <device>
list [-H] [-o field[,field]*] [pool] ...
iostat [-v] [pool] ... [interval [count]]
status [-vx] [pool] ...
online <pool> <device> ...
offline [-t] <pool> <device> ...
clear <pool> [device]
attach [-f] <pool> <device> <new_device>
detach <pool> <device>
replace [-f] <pool> <device> [new_device]
scrub [-s] <pool> ...
import [-d dir] [-D]
import [-d dir] [-D] [-f] [-o opts] [-R root] -a
import [-d dir] [-D] [-f] [-o opts] [-R root ] <pool | id> [newpool]
export [-f] <pool> ...
upgrade
upgrade -v
upgrade <-a | pool>
history [<pool>]
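The zfs command prints a similar menu when run without arguments. As a rough sketch from the same era (abridged, and the exact flags vary by release), its subcommands are:
# zfs
missing command
usage: zfs command args ...
where 'command' is one of the following (abridged):
create, destroy, snapshot, rollback, clone, promote, rename,
list, set, get, inherit, mount, unmount, share, unshare,
send, receive
These subcommands are what chapters 2 and 3 walk through.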
1.1 Creating a zpool
ZFS file systems are built on top of a storage pool, so the underlying pool must be created before any file system.
1.1.1 Creating a non-redundant zpool
zpool create yz c3t0d0 c3t0d1
# zpool create First c3t2d0 c3t4d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c3t2d0s2 contains a ufs filesystem.
/dev/dsk/c3t2d0s7 contains a ufs filesystem.
A disk added to a pool may be a whole disk or a slice; when conditions allow, use whole disks so the pool can manage the disks itself. Because the disks added here previously carried UFS, the create needs the -f option to ignore the UFS format and force the disks into the pool.
Note: any data previously on a disk added to a pool is destroyed.
# zpool create -f first c3t2d0 c3t3d0    (create a pool from two disks)
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
first 136G 90K 136G 0% ONLINE -
# zpool status
pool: first
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
first ONLINE 0 0 0
c3t2d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
errors: No known data errors
1.1.2 Creating a mirrored pool
# zpool create -f yz mirror c3t0d0 c3t1d0
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
first 136G 90K 136G 0% ONLINE -
yz 68G 6.08G 61.9G 8% ONLINE -
# zpool status
pool: yz
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
yz ONLINE 0 0 0
mirror ONLINE 0 0 0
c3t0d0 ONLINE 0 0 0
c3t1d0 ONLINE 0 0 0
errors: No known data errors
To create a pool out of mirrors: putting more than two disks into a mirror vdev makes an n-way mirror, and multiple mirror vdevs give the equivalent of RAID 1+0. In the example below there are two 3-way mirrors; each disk is 68G, and the pool totals 136G (two mirror vdevs at 68G usable each).
RCGSM-root-/yztest/2> zpool create xxx mirror c3t0d0 c3t1d0 c3t2d0 mirror c3t3d0 c3t4d0 c3t5d0
RCGSM-root-/yztest/2> zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
xxx 136G 90K 136G 0% ONLINE -
RCGSM-root-/yztest/2> zpool status
pool: xxx
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
xxx ONLINE 0 0 0
mirror ONLINE 0 0 0
c3t0d0 ONLINE 0 0 0
c3t1d0 ONLINE 0 0 0
c3t2d0 ONLINE 0 0 0
mirror ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
c3t4d0 ONLINE 0 0 0
c3t5d0 ONLINE 0 0 0
errors: No known data errors
# zpool status -x
all pools are healthy
# df -h
Filesystem size used avail capacity Mounted on
first 134G 24K 134G 1% /first
1.1.3 Creating a raidz zpool
zpool supports two raidz flavors, raidz1 and raidz2, which are similar to traditional RAID 5. A raidz vdev needs at least 3 devices to protect the data with parity. raidz (that is, raidz1) gives up one disk's worth of space to parity, and raidz2 gives up two disks' worth.
# zpool create yz raidz1 c3t0d0 c3t1d0 c3t2d0 c3t3d0
# zpool status
pool: yz
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
yz ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c3t0d0 ONLINE 0 0 0
c3t1d0 ONLINE 0 0 0
c3t2d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
errors: No known data errors
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
yz 272G 147K 272G 0% ONLINE -
# df -h
Filesystem size used avail capacity Mounted on
yz 200G 36K 200G 1% /yz
# zpool create yz raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
yz 272G 226K 272G 0% ONLINE -
# df -h
Filesystem size used avail capacity Mounted on
yz 133G 36K 133G 1% /yz
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
yz 272G 226K 272G 0% ONLINE -
# zpool status
pool: yz
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
yz ONLINE 0 0 0
raidz2 ONLINE 0 0 0
c3t0d0 ONLINE 0 0 0
c3t1d0 ONLINE 0 0 0
c3t2d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
errors: No known data errors
Note: with the same four-disk raidz, raidz2 gives up two disks' worth of space and raidz1 one disk's worth. With the 68G disks above, raidz1 leaves roughly 3 x 68G = 204G usable (df reports 200G) and raidz2 roughly 2 x 68G = 136G (df reports 133G). In return, raidz2's parity is stronger: it survives two disk failures instead of one.
1.2 Destroying a zpool
zpool destroy poolname
If the zpool is in use, destroy complains that the device is busy; force it with the -f flag.
RCGSM-root-/yztest/2> zpool destroy yz
cannot unmount '/yztest/2': Device busy
could not destroy 'yz': could not unmount datasets
RCGSM-root-/yztest/2> zpool destroy -f yz
Attachment: ZFS 学习笔记.zip
________________________________________
sunshiene replied on 2008-10-10 16:50:48
1.3 Managing zpools
1.3.1 Attaching and detaching mirrors
If a zpool was created on a single physical disk, a mirror can be attached to it. The command format is:
zpool attach <pool> <existing_device> <new_device>, e.g. zpool attach xxx c3t0d0 c3t5d0
例子:
# zpool status
pool: xxx
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
xxx ONLINE 0 0 0
c3t0d0 ONLINE 0 0 0
errors: No known data errors
# zpool attach xxx c3t5d0
missing <new_device> specification
usage:
attach [-f] <pool> <device> <new_device>
# zpool attach xxx c3t0d0 c3t5d0
# zpool status xxx
pool: xxx
state: ONLINE
scrub: resilver completed with 0 errors on Tue Oct 7 17:11:18 2008
config:
NAME STATE READ WRITE CKSUM
xxx ONLINE 0 0 0
mirror ONLINE 0 0 0
c3t0d0 ONLINE 0 0 0
c3t5d0 ONLINE 0 0 0
errors: No known data errors
Use the zpool detach subcommand to split a mirror apart.
Example:
# zpool detach xxx c3t5d0
# zpool status
pool: xxx
state: ONLINE
scrub: resilver completed with 0 errors on Tue Oct 7 17:11:18 2008
config:
NAME STATE READ WRITE CKSUM
xxx ONLINE 0 0 0
c3t0d0 ONLINE 0 0 0
errors: No known data errors
1.3.2 Adding space to a zpool
Use the zpool add subcommand to grow a pool; the -n flag performs a dry run that shows what the pool would look like after the add.
Example:
# zpool status
pool: xxx
state: ONLINE
scrub: resilver completed with 0 errors on Tue Oct 7 17:11:18 2008
config:
NAME STATE READ WRITE CKSUM
xxx ONLINE 0 0 0
c3t0d0 ONLINE 0 0 0
errors: No known data errors
# zpool add xxx c3t5d0
# zpool status xxx
pool: xxx
state: ONLINE
scrub: resilver completed with 0 errors on Wed Oct 8 08:53:02 2008
config:
NAME STATE READ WRITE CKSUM
xxx ONLINE 0 0 0
c3t0d0 ONLINE 0 0 0
c3t5d0 ONLINE 0 0 0
errors: No known data errors
# zpool list xxx
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
xxx 136G 91K 136G 0% ONLINE -
#/mnt> zpool status
pool: poolname
state: ONLINE
scrub: resilver completed with 0 errors on Wed Oct 8 13:16:35 2008
config:
NAME STATE READ WRITE CKSUM
poolname ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c3t0d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
c3t4d0 ONLINE 0 0 0
errors: No known data errors
#/mnt> zpool add -n poolname c3t1d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
#/mnt> zpool add -n -f poolname c3t1d0
would update 'poolname' to the following configuration:
poolname
raidz1
c3t0d0
c3t3d0
c3t4d0
c3t1d0
1.3.3 Adding and removing spare disks
For raidz and mirror pools, zpool supports hot spares, and one spare can serve several pools at once; for example, a mirror pool and a raidz pool can use the same disk as their spare.
# zpool add poolname spare c3t5d0
# zpool add xxx spare c3t5d0
# zpool status
pool: poolname
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
poolname ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c3t2d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
c3t4d0 ONLINE 0 0 0
spares
c3t5d0 AVAIL
errors: No known data errors
pool: xxx
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
xxx ONLINE 0 0 0
mirror ONLINE 0 0 0
c3t0d0 ONLINE 0 0 0
c3t1d0 ONLINE 0 0 0
spares
c3t5d0 AVAIL
errors: No known data errors
Remove a spare disk with the zpool remove subcommand.
# zpool remove xxx c3t5d0
# zpool remove poolname c3t5d0
# zpool status -x
all pools are healthy
# zpool status -v
pool: poolname
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
poolname ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c3t2d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
c3t4d0 ONLINE 0 0 0
errors: No known data errors
pool: xxx
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
xxx ONLINE 0 0 0
mirror ONLINE 0 0 0
c3t0d0 ONLINE 0 0 0
c3t1d0 ONLINE 0 0 0
errors: No known data errors
1.4 zpool maintenance / replacing failed disks
When a disk in a zpool fails and needs to be swapped out, use the zpool replace subcommand.
#/mnt> zpool replace
missing pool name argument
usage:
replace [-f] <pool> <device> [new_device]
#/mnt> zpool replace poolname c3t2d0 c3t0d0
#/mnt> zpool status
pool: poolname
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 6.71% done, 0h3m to go
config:
NAME STATE READ WRITE CKSUM
poolname ONLINE 0 0 0
raidz1 ONLINE 0 0 0
replacing ONLINE 0 0 0
c3t2d0 ONLINE 0 0 0
c3t0d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
c3t4d0 ONLINE 0 0 0
errors: No known data errors
#/mnt> zpool status
pool: poolname
state: ONLINE
scrub: resilver completed with 0 errors on Wed Oct 8 13:16:35 2008
config:
NAME STATE READ WRITE CKSUM
poolname ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c3t0d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
c3t4d0 ONLINE 0 0 0
errors: No known data errors
When adding space, the newly added disks can themselves be given data protection, such as a mirror or raidz vdev:
#/mnt> zpool add -n -f poolname raidz2 c3t1d0 c3t2d0 c3t5d0
would update 'poolname' to the following configuration:
poolname
raidz1
c3t0d0
c3t3d0
c3t4d0
raidz2
c3t1d0
c3t2d0
c3t5d0
Example: dealing with a failed disk in a raidz
In this example I take poolname's disk c3t0d0 offline and then physically pull it out of the 3310 array.
#/mnt> zpool offline poolname c3t0d0
Bringing device c3t0d0 offline
#/mnt> zpool status
pool: poolname
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
scrub: resilver completed with 0 errors on Wed Oct 8 13:16:35 2008
config:
NAME STATE READ WRITE CKSUM
poolname DEGRADED 0 0 0
raidz1 DEGRADED 0 0 0
c3t0d0 OFFLINE 0 0 0
c3t3d0 ONLINE 0 0 0
c3t4d0 ONLINE 0 0 0
errors: No known data errors
After the zpool offline, pull the disk, then run zpool online; the disk now shows up in the pool as FAULTED. Thanks to the raidz parity, zpool replace can swap in a new disk and rebuild the data onto it.
#/mnt> zpool status
pool: poolname
state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-4J
scrub: resilver completed with 0 errors on Wed Oct 8 14:03:47 2008
config:
NAME STATE READ WRITE CKSUM
poolname DEGRADED 0 0 0
raidz1 DEGRADED 0 0 0
c3t0d0 FAULTED 0 0 0 corrupted data
c3t3d0 ONLINE 0 0 0
c3t4d0 ONLINE 0 0 0
errors: No known data errors
#/mnt> zpool replace poolname c3t0d0 c3t2d0
#/mnt> zpool status
pool: poolname
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 2.97% done, 0h3m to go
config:
NAME STATE READ WRITE CKSUM
poolname DEGRADED 0 0 0
raidz1 DEGRADED 0 0 0
replacing DEGRADED 0 0 0
c3t0d0 FAULTED 0 0 0 corrupted data
c3t2d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
c3t4d0 ONLINE 0 0 0
errors: No known data errors
# zpool status
pool: poolname
state: ONLINE
scrub: resilver completed with 0 errors on Wed Oct 8 14:10:02 2008
config:
NAME STATE READ WRITE CKSUM
poolname ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c3t2d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
c3t4d0 ONLINE 0 0 0
If the disk was replaced in its original slot and just needs to be resilvered, simply run:
zpool replace poolname c3t0d0
# zpool replace poolname c3t2d0
# zpool status
pool: poolname
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 3.75% done, 0h2m to go
config:
NAME STATE READ WRITE CKSUM
poolname DEGRADED 0 0 0
raidz1 DEGRADED 0 0 0
replacing DEGRADED 0 0 0
c3t2d0s0/o UNAVAIL 0 0 0 cannot open
c3t2d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
c3t4d0 ONLINE 0 0 0
errors: No known data errors
1.5 Migrating a zpool
Before migrating a zpool, make sure none of its file systems are in use; to migrate regardless, add the -f flag.
# zpool export poolname
# zpool status
no pools available
#
# zpool import
pool: poolname
id: 10727618928512001646
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
poolname ONLINE
raidz1 ONLINE
c3t2d0 ONLINE
c3t3d0 ONLINE
c3t4d0 ONLINE
# zpool status
no pools available
# zpool import 10727618928512001646    (either the pool's id or its name works here)
# zpool status
pool: poolname
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
poolname ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c3t2d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
c3t4d0 ONLINE 0 0 0
1.6 Recovering a destroyed zpool
After a zpool has been destroyed, zpool import -D shows whether the most recently destroyed zpool can still be recovered.
#/zzz> cp -r /opt /zzz
^C
#/zzz> ls
opt
#/zzz> du -sh .
3.6G .
#/zzz> cd
# zpool destroy zzz
# zpool import -D
pool: zzz
id: 7148440739272373876
state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:
zzz ONLINE
c3t0d0 ONLINE
c3t1d0 ONLINE
# zpool import -Df zzz
# zpool status
pool: zzz
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
zzz ONLINE 0 0 0
c3t0d0 ONLINE 0 0 0
c3t1d0 ONLINE 0 0 0
errors: No known data errors
# cd /zzz
#/zzz> ls
opt
#/zzz> du -sh .
3.6G .
If the pool listed by import -D is not in the ONLINE state, it can no longer be recovered.
#/zzz> cd
# zpool destroy zzz
# zpool create zzz c3t0d0
# zpool import -D
pool: zzz
id: 7148440739272373876
state: FAULTED (DESTROYED)
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: http://www.sun.com/msg/ZFS-8000-6X
config:
zzz UNAVAIL missing device
c3t1d0 ONLINE
Additional devices are known to be part of this pool, though their
exact configuration cannot be determined.
# zpool destroy zzz
# zpool import -D
pool: zzz
id: 7148440739272373876
state: FAULTED (DESTROYED)
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: http://www.sun.com/msg/ZFS-8000-6X
config:
zzz UNAVAIL missing device
c3t1d0 ONLINE
Additional devices are known to be part of this pool, though their
exact configuration cannot be determined.
pool: zzz
id: 6600816109531139366
state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:
zzz ONLINE
c3t0d0 ONLINE
# zpool import -Df 7148440739272373876
cannot import 'zzz': one or more devices is currently unavailable
1.7 zpool I/O statistics
Use the zpool iostat command to examine a zpool's I/O activity.
# zpool iostat
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
poolname 64.7G 139G 0 1 34 115K
# zpool iostat -v
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
poolname 64.7G 139G 0 1 34 115K
raidz1 64.7G 139G 0 1 34 115K
c3t2d0 - - 0 0 170 57.6K
c3t3d0 - - 0 0 169 57.6K
c3t4d0 - - 0 0 164 57.6K
---------- ----- ----- ----- ----- ----- -----
# zpool iostat 2 2
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
poolname 64.7G 139G 0 1 34 115K
poolname 64.7G 139G 0 0 0 0
# zpool iostat -v 2 2
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
poolname 64.7G 139G 0 1 34 115K
raidz1 64.7G 139G 0 1 34 115K
c3t2d0 - - 0 0 169 57.6K
c3t3d0 - - 0 0 169 57.6K
c3t4d0 - - 0 0 164 57.6K
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
poolname 64.7G 139G 0 0 0 0
raidz1 64.7G 139G 0 0 0 0
c3t2d0 - - 0 0 0 0
c3t3d0 - - 0 0 0 0
c3t4d0 - - 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
1.8 Migrating ZFS storage pools
A zpool can be switched between hosts, much like a VxVM disk group or an SVM metaset. In general, as long as a zpool's state is not FAULTED, it can be exported and imported between the hosts attached to its disks. A FAULTED zpool cannot be imported; a DEGRADED one still can, since it has enough replicas left to keep functioning.
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
poolname 204G 64.7G 139G 31% ONLINE -
# zpool export poolname
# zpool list
no pools available
# zpool import
pool: poolname
id: 10727618928512001646
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
poolname ONLINE
raidz1 ONLINE
c3t2d0 ONLINE
c3t3d0 ONLINE
c3t4d0 ONLINE
# zpool import 10727618928512001646
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
poolname 204G 64.7G 139G 31% ONLINE -
Of course, a zpool created under a newer ZFS version cannot be imported on an older ZFS; compatibility only goes one way, with newer software reading older pools.
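To check which on-disk version the local pools are using before moving them, zpool upgrade with no arguments reports it; a minimal sketch (the wording matches the output shown in 1.9 below):
# zpool upgrade
This system is currently running ZFS version 4.
All pools are formatted using this version.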
1.9 Upgrading a zpool's version
If a zpool created under an older OS release has been migrated to a newer one, the zpool needs to be upgraded.
Use the zpool upgrade -v subcommand to list the ZFS versions the system supports, and zpool upgrade -a to upgrade every pool created under an earlier version.
# zpool upgrade -a
This system is currently running ZFS version 4.
All pools are formatted using this version.
# zpool upgrade -v
This system is currently running ZFS version 4.
The following versions are supported:
VER DESCRIPTION
--- --------------------------------------------------------
1 Initial ZFS version
2 Ditto blocks (replicated metadata)
3 Hot spares and double parity RAID-Z
4 zpool history
For more information on a particular version, including supported releases, see:
http://www.opensolaris.org/os/community/zfs/version/N
Where 'N' is the version number.
Note: once pools are upgraded to the latest version, they can no longer be accessed from systems running earlier ZFS versions.
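Per the usage menu at the top (upgrade <-a | pool>), a single pool can also be upgraded by name, which helps when other pools must stay importable on older hosts; a sketch reusing the pool name from earlier examples:
# zpool upgrade poolname
Afterwards poolname can no longer be imported on systems running an older ZFS version, while pools left alone are unaffected.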
________________________________________
sunshiene replied on 2008-10-10 16:51:40
Chapter 2: Creating and configuring ZFS file systems
After a pool is created, the system by default generates a file system with the same name as the pool. If a non-empty directory of that name already exists under /, the pool creation fails; in that case specify a mountpoint for the pool's file system at creation time, e.g.:
# zpool create opt c3t5d0
default mountpoint '/opt' exists and is not empty
use '-m' option to provide a different default
# zpool create -m /test opt c3t5d0
# df -h
Filesystem size used avail capacity Mounted on
first 134G 24K 134G 1% /first
# df -h
Filesystem size used avail capacity Mounted on
……
poolname 134G 10.0G 124G 8% /poolname
opt 67G 24K 67G 1% /test
Note: the mountpoint directory is created automatically when the pool is created and removed automatically when the pool is destroyed. If the mountpoint directory was created beforehand, make sure it is empty.
Note: keep the concepts of ZFS file system and mountpoint separate. A ZFS file system name is relative to its zpool; the mountpoint is merely where it attaches in the directory tree.
2.1 Creating and destroying ZFS file systems
2.1.1 Creating a ZFS file system
When a zpool is created, the system by default generates a file system with the same name as the pool.
# zpool create qqq c3t1d0
# df -h
Filesystem size used avail capacity Mounted on
qqq 67G 24K 67G 1% /qqq
So when the zpool is destroyed, the directory /qqq is removed automatically as well.
Create another zfs file system inside the pool qqq:
# zfs create qqq/3
# df -h
Filesystem size used avail capacity Mounted on
qqq 67G 26K 67G 1% /qqq
qqq/3 67G 24K 67G 1% /qqq/3
When a zfs file system is created, its default mountpoint is a directory with the child's name under the parent file system's mountpoint.
# zfs create poolname/ttt/q
# df -h
Filesystem size used avail capacity Mounted on
poolname 134G 33K 134G 1% /poolname
poolname/qqqq/qqqq 134G 33K 134G 1% /qqq
poolname/qqqq 134G 33K 134G 1% /te
poolname/ttt 134G 33K 134G 1% /ttt
poolname/ttt/q 134G 33K 134G 1% /ttt/q
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
poolname 320K 134G 32.6K /poolname
poolname/qqqq 65.3K 134G 32.6K /te
poolname/qqqq/qqqq 32.6K 134G 32.6K /qqq
poolname/ttt 67.9K 134G 35.3K /ttt
poolname/ttt/q 32.6K 134G 32.6K /ttt/q
2.1.2 Renaming a ZFS file system
Use the zfs rename subcommand to rename a zfs file system.
Note: the rename applies to the file system carved out of the zpool, not to its mountpoint.
# zfs create poolname/test
# zfs rename poolname/test poolname/ttt
# df -h
Filesystem size used avail capacity Mounted on
poolname 134G 35K 134G 1% /poolname
poolname/ttt 134G 32K 134G 1% /poolname/ttt
#
2.1.3 Destroying a ZFS file system
# zfs destroy qqq/3
# df -h
Filesystem size used avail capacity Mounted on
poolname 134G 34K 134G 1% /poolname
poolname/test 134G 32K 134G 1% /poolname/test
# zfs create poolname/test/1
# zfs destroy poolname/test
cannot destroy 'poolname/test': filesystem has children
use '-r' to destroy the following datasets:
poolname/test/1
# zfs destroy -r poolname/test
Use the -r flag to destroy a zfs file system together with its children.
2.2 Introduction to ZFS properties
ZFS properties come in two classes: native properties and user properties.
Native properties in turn are either settable or read-only.
The following properties are supported:
PROPERTY EDIT INHERIT VALUES
type NO NO filesystem | volume | snapshot
creation NO NO <date>
used NO NO <size>
available NO NO <size>
referenced NO NO <size>
compressratio NO NO <1.00x or higher if compressed>
mounted NO NO yes | no | -
origin NO NO <snapshot>
quota YES NO <size> | none
reservation YES NO <size> | none
volsize YES NO <size>
volblocksize NO NO 512 to 128k, power of 2
recordsize YES YES 512 to 128k, power of 2
mountpoint YES YES <path> | legacy | none
sharenfs YES YES on | off | share(1M) options
checksum YES YES on | off | fletcher2 | fletcher4 | sha256
compression YES YES on | off | lzjb
atime YES YES on | off
devices YES YES on | off
exec YES YES on | off
setuid YES YES on | off
readonly YES YES on | off
zoned YES YES on | off
snapdir YES YES hidden | visible
aclmode YES YES discard | groupmask | passthrough
aclinherit YES YES discard | noallow | secure | passthrough
canmount YES NO on | off
shareiscsi YES YES on | off | type=<type>
xattr YES YES on | off
Sizes are specified in bytes with standard units such as K, M, G, etc.
Each property is described in detail in the Solaris ZFS Administration Guide:
http://docs.sun.com/app/docs/doc/819-7065?l=zh&a=load
zfs get all poolname
zfs get option1,option2 poolname
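Besides the native properties in the table above, user properties are free-form name=value annotations whose names must contain a colon to keep them apart from native ones. A hedged sketch, assuming a ZFS release recent enough to support user properties (the name dept:owner is made up for illustration):
# zfs set dept:owner=yz poolname
# zfs get dept:owner poolname
NAME      PROPERTY    VALUE  SOURCE
poolname  dept:owner  yz     local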
canmount
canmount YES NO on | off
canmount controls whether a zfs file system may be mounted. With the value on the file system can be mounted; with off it cannot, even via zfs mount. Its INHERIT column is NO, so the setting is not passed down to child file systems, which lets a parent file system serve purely as a container.
# zfs set canmount=off poolname
# zfs create -o mountpoint=/AAA poolname/AAA
# zfs create -o mountpoint=/BBB poolname/BBB
# zfs umount -a
# zfs mount -a
# df -h
Filesystem size used avail capacity Mounted on
poolname/AAA 134G 32K 134G 1% /AAA
poolname/BBB 134G 32K 134G 1% /BBB
poolname/qqqq/qqqq 134G 32K 134G 1% /qqq
poolname/qqqq 134G 32K 134G 1% /te
poolname/ttt 134G 35K 134G 1% /ttt
poolname/ttt/q 134G 32K 134G 1% /ttt/q
2.3 Querying ZFS file system information
Query the state of ZFS file systems with the zfs list command.
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
poolname 43.1G 90.7G 32.6K /poolname
poolname/AAA 9.80G 90.7G 9.80G /AAA
poolname/AAA@1 30.0K - 32.6K -
(the preceding line is a snapshot of the file system poolname/AAA)
poolname/BBB 32.6K 90.7G 32.6K /ABC
poolname/qqqq 65.3K 90.7G 32.6K /te
poolname/qqqq/qqqq 32.6K 90.7G 32.6K /qqq
poolname/ttt 33.3G 90.7G 6.73M /ttt
poolname/ttt/q 33.3G 90.7G 33.3G /ttt/q
# zfs list -r poolname/qqqq
NAME USED AVAIL REFER MOUNTPOINT
poolname/qqqq 65.3K 90.7G 32.6K /te
poolname/qqqq/qqqq 32.6K 90.7G 32.6K /qqq
2.4 Managing ZFS properties
Managing ZFS properties covers setting (set), inheritance (inherit), and querying (get).
2.4.1 Setting (set)
zfs set changes a ZFS file system's property values; properties can also be set at creation time through zfs create -o.
Example:
# zfs get atime
NAME PROPERTY VALUE SOURCE
poolname atime on default
poolname/AAA atime on default
poolname/AAA@1 atime - -
poolname/BBB atime on default
poolname/qqqq atime on default
poolname/qqqq/qqqq atime on default
poolname/ttt atime on default
poolname/ttt/q atime on default
# zfs set atime=off poolname/qqqq
# zfs get atime poolname/qqqq
NAME PROPERTY VALUE SOURCE
poolname/qqqq atime off local
# zfs get atime
NAME PROPERTY VALUE SOURCE
poolname atime on default
poolname/AAA atime on default
poolname/AAA@1 atime - -
poolname/BBB atime on default
poolname/qqqq atime off local
poolname/qqqq/qqqq atime off inherited from poolname/qqqq
poolname/ttt atime on default
poolname/ttt/q atime on default
The example above shows that setting a property on poolname/qqqq passes the value down to its child file system by default; the property is inherited.
Setting a property while creating a zfs file system:
# zfs create -o atime=off poolname/1111
# zfs get atime poolname/1111
NAME PROPERTY VALUE SOURCE
poolname/1111 atime off local
# zfs set quota=10g poolname/1111
# zfs get quota poolname/1111
NAME PROPERTY VALUE SOURCE
poolname/1111 quota 10G local
2.4.2 Inheriting (inherit)
In general, a child file system inherits its property values from the file system above it. The zfs inherit subcommand clears a locally set value so that the property is inherited (or falls back to its default) again:
# zfs get -r sharenfs poolname/qqqq
NAME PROPERTY VALUE SOURCE
poolname/qqqq sharenfs off default
poolname/qqqq/qqqq sharenfs on local
# zfs inherit sharenfs poolname/qqqq/qqqq
# zfs get -r sharenfs poolname/qqqq
NAME PROPERTY VALUE SOURCE
poolname/qqqq sharenfs off default
poolname/qqqq/qqqq sharenfs off default
2.4.3 Querying (get)
The simplest query of zfs file system properties is zfs list, but more detailed information calls for zfs get.
# zfs get sharenfs
NAME PROPERTY VALUE SOURCE
poolname sharenfs on local
poolname/1111 sharenfs on inherited from poolname
poolname/AAA sharenfs on inherited from poolname
poolname/AAA@1 sharenfs - -
poolname/BBB sharenfs on inherited from poolname
poolname/qqqq sharenfs off local
poolname/qqqq/qqqq sharenfs off inherited from poolname/qqqq
poolname/ttt sharenfs on inherited from poolname
poolname/ttt/q sharenfs on inherited from poolname
The -s flag filters on where a property's value comes from; the possible sources are local, default, inherited, temporary, and none:
default - the property was never explicitly set on this dataset or any of its ancestors; the default value is in use.
inherited from dataset-name - the value is inherited from the parent dataset named dataset-name.
local - the value was explicitly set on this dataset with zfs set.
temporary - the value was set with zfs mount -o and lasts only for the lifetime of the mount.
none - the property is read-only; its value is generated by ZFS.
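For example, listing only the properties set locally on one dataset (names taken from this walkthrough; the values match the set commands shown earlier):
# zfs get -s local all poolname/1111
NAME           PROPERTY  VALUE  SOURCE
poolname/1111  quota     10G    local
poolname/1111  atime     off    local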
# zfs get -s inherited sharenfs
NAME PROPERTY VALUE SOURCE
poolname/1111 sharenfs on inherited from poolname
poolname/AAA sharenfs on inherited from poolname
poolname/BBB sharenfs on inherited from poolname
poolname/qqqq/qqqq sharenfs off inherited from poolname/qqqq
poolname/ttt sharenfs on inherited from poolname
poolname/ttt/q sharenfs on inherited from poolname
# zfs get all    (displays every property of all file systems and their snapshots)
NAME PROPERTY VALUE SOURCE
poolname type filesystem -
poolname creation Wed Oct 8 9:59 2008 -
poolname used 43.1G -
poolname available 90.7G -
poolname referenced 32.6K -
poolname compressratio 1.00x -
poolname mounted no -
poolname quota none default
poolname reservation none default
poolname recordsize 128K default
poolname mountpoint /poolname default
poolname sharenfs on local
poolname checksum on default
poolname compression off default
poolname atime on default
poolname devices on default
poolname exec on default
poolname setuid on default
poolname readonly off default
poolname zoned off default
poolname snapdir hidden default
poolname aclmode groupmask default
poolname aclinherit secure default
poolname canmount off local
poolname shareiscsi off default
poolname xattr on default
poolname/1111 type filesystem -
poolname/1111 creation Mon Oct 13 13:32 2008 -
poolname/1111 used 32.6K -
poolname/1111 available 10.0G -
poolname/1111 referenced 32.6K -
poolname/1111 compressratio 1.00x -
poolname/1111 mounted yes -
poolname/1111 quota 10G local
poolname/1111 reservation none default
poolname/1111 recordsize 128K default
poolname/1111 mountpoint /poolname/1111 default
poolname/1111 sharenfs on inherited from poolname
poolname/1111 checksum on default
poolname/1111 compression off default
poolname/1111 atime off local
poolname/1111 devices on default
poolname/1111 exec on default
.......
lname/ttt/q type filesystem -
poolname/ttt/q creation Thu Oct 9 14:03 2008 -
poolname/ttt/q used 33.3G -
poolname/ttt/q available 90.7G -
poolname/ttt/q referenced 33.3G -
poolname/ttt/q compressratio 1.00x -
poolname/ttt/q mounted yes -
poolname/ttt/q quota none default
poolname/ttt/q reservation none default
poolname/ttt/q recordsize 128K default
poolname/ttt/q mountpoint /ttt/q inherited from poolname/ttt
poolname/ttt/q sharenfs on inherited from poolname
poolname/ttt/q checksum on default
poolname/ttt/q compression off default
poolname/ttt/q atime on default
poolname/ttt/q devices on default
poolname/ttt/q exec on default
poolname/ttt/q setuid on default
poolname/ttt/q readonly off default
poolname/ttt/q zoned off default
poolname/ttt/q snapdir hidden default
poolname/ttt/q aclmode groupmask default
poolname/ttt/q aclinherit secure default
poolname/ttt/q canmount on default
poolname/ttt/q shareiscsi off default
poolname/ttt/q xattr on default
________________________________________
win_study replied on 2008-10-10 17:17:09
A question: is ZFS currently only used in OpenSolaris? Does the 10/08 release of Solaris 10 use ZFS as its file system?
________________________________________
yuhuohu replied on 2008-10-10 17:22:45
Solaris 10 has ZFS, naturally.
________________________________________
sunshiene replied on 2008-10-10 17:28:07
Quote: originally posted by win_study on 2008-10-10 17:17 (http://bbs.chinaunix.net/redirect.php?goto=findpost&pid=9425263&ptid=1284850)
A question: is ZFS currently only used in OpenSolaris? Does the 10/08 release of Solaris 10 use ZFS as its file system?
Solaris 10 10/08 is the first release to support ZFS as the root file system type; earlier releases do not. Looking forward to it. Every Solaris release since 11/06 supports ZFS as a file system.
________________________________________
anfield replied on 2008-10-11 02:09:11
10/08 adds support for ZFS boot and a ZFS root file system.
If you are interested in ZFS, the following are worth a read:
ZFS: the last word in file system
http://www.sun.com/2004-0914/feature/
and Jeff Bonwick's own blog:
http://blogs.sun.com/bonwick/en_US/category/ZFS
Sun and NetApp are in a lawsuit over ZFS right now...
________________________________________
小把戏 replied on 2008-10-11 09:23:18
The automation in ZFS is really advanced.
I expect Linux will get something like it in the not-too-distant future.
________________________________________
sunshiene replied on 2008-10-13 16:06:23
Running out of room in this post :) quickly grabbing a few more slots.
2.4.4 Mounting and unmounting ZFS file systems
By default, all ZFS file systems are mounted at boot by ZFS through SMF's svc://system/filesystem/local service, so /etc/vfstab never needs editing.
From the zpool's point of view, poolname/1111 is the file system name and /poolname/1111 is its mountpoint; from the operating system's point of view, /poolname/1111 is what appears as the mounted file system. The mountpoint can be changed with zfs set mountpoint=/path xxx, and doing so does not rename the zfs file system inside the pool.
Two mount-related properties deserve a closer look: mountpoint and canmount. mountpoint, the file system's mountpoint, is inheritable; canmount, which controls whether the file system may be mounted at all (on/off), is not inheritable, as the property table in 2.2 shows. Take the following output as an example:
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
poolname 43.1G 90.7G 32.6K /poolname
poolname/1111 32.6K 10.0G 32.6K /poolname/1111
A ZFS file system can be mounted in two ways: with zfs mount, or with mount -F zfs.
Command format:
mount
mount [-o opts] [-O] -a
(-a mounts all zfs file systems; -o passes mount options.)
mount [-o opts] [-O] <filesystem>
unmount [-f] -a    (unmounts all zfs file systems)
unmount [-f] <filesystem|mountpoint>
# zfs mount
poolname/AAA /AAA
poolname/BBB /ABC
poolname/qqqq/qqqq /qqq
# zfs umount poolname/qqqq/qqqq
# zfs mount poolname/qqqq/qqqq
# zfs mount
poolname/AAA /AAA
poolname/BBB /ABC
poolname/qqqq/qqqq /qqq
Before using mount -F zfs, the file system's mountpoint property must be set appropriately:
# mount -F zfs poolname/ttt/q /te
filesystem 'poolname/ttt/q' cannot be mounted using 'mount -F zfs'
Use 'zfs set mountpoint=/te' instead.
If you must use 'mount -F zfs' or /etc/vfstab, use 'zfs set mountpoint=legacy'.
See zfs(1M) for more information.
# zfs set mountpoint=legacy poolname/ttt/q
# mount -F zfs poolname/ttt/q /mnt
# df -h
Filesystem size used avail capacity Mounted on
poolname/ttt/q 134G 33G 91G 27% /mnt
#
The -a flag: -a operates on all file systems, so mount and umount are demonstrated here together. umount is simpler than mount, since it has no -o flag, only the -f force flag. zfs umount was already used above to unmount a single file system; next come umount -a and mount -a.
# zfs mount
poolname/AAA /AAA
poolname/BBB /ABC
poolname/qqqq/qqqq /qqq
poolname/1111 /poolname/1111
poolname/ttt /ttt
poolname/qqqq /te
poolname/ttt/q /mnt
# zfs umount -a
# zfs mount
poolname/ttt/q /mnt
# umount /mnt
# zfs set mountpoint=/aaa poolname/ttt/q
# zfs mount -a
# zfs mount
poolname/ttt/q /aaa
poolname/AAA /AAA
poolname/BBB /ABC
poolname/1111 /poolname/1111
poolname/qqqq/qqqq /qqq
poolname/qqqq /te
poolname/ttt /ttt
#
The -o flag: zfs mount's -o flag has the same meaning as for the Unix mount command and can be followed by options such as ro or rw:
# zfs umount poolname/ttt
# zfs mount -o ro poolname/ttt
# mount|grep /ttt
/ttt on poolname/ttt read only/setuid/devices/exec/xattr/atime/dev=401005e on Mon Oct 13 15:07:27 2008
The -O flag mounts a file system on top of a non-empty directory:
# zfs umount poolname/ttt
# cd /ttt
# touch 1 2 3 4 5
# ls
1 2 3 4 5
# zfs mount poolname/ttt
cannot mount '/ttt': directory is not empty
# zfs mount -O poolname/ttt
# df -h|grep poolname/ttt
poolname/ttt/q 134G 33G 91G 27% /aaa
poolname/ttt 134G 6.7M 91G 1% /ttt
2.4.5 Sharing and unsharing ZFS file systems
The sharenfs property is the key parameter for sharing. When it is on and you run zfs share filesys, the file system is shared over NFS; zfs unshare filesys withdraws the share.
# zfs share
missing filesystem argument
usage:
share -a
share <filesystem>
For the property list, run: zfs set|get
# zfs share -a
# zfs share
# share
- /AAA rw ""
- /ABC rw ""
- /aaa rw ""
- /poolname/1111 rw ""
- /ttt rw ""
# zfs unshare
missing filesystem argument
usage:
unshare [-f] -a
unshare [-f] <filesystem|mountpoint>
For the property list, run: zfs set|get
# zfs unshare -a
# share
#
Setting the share mode to read-only:
# zfs set sharenfs=ro poolname/ttt
# zfs share poolname/ttt
# share
- /ttt ro ""
A ZFS file system's share can be withdrawn either with zfs unshare or with the legacy unshare command:
# zfs share poolname/AAA
# zfs share poolname/BBB
# share
- /AAA rw ""
- /ABC rw ""
# zfs unshare poolname/AAA
# share
- /ABC rw ""
# unshare /ABC
# share
2.4.6 ZFS quotas and reservations
Quotas and reservations are enforced through two separate properties, both settable with zfs set:
quota - the most space the file system may consume.
reservation - space that the file system, and hence its parent, must keep set aside for its use.
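Both can also be set at creation time through -o, as in 2.4.1; a sketch (poolname/proj is a made-up dataset name):
# zfs create -o quota=10g -o reservation=1g poolname/proj
# zfs get quota,reservation poolname/proj
NAME           PROPERTY     VALUE  SOURCE
poolname/proj  quota        10G    local
poolname/proj  reservation  1G     local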
In zfs list output, the column that shows a file system's own data size is actually REFER, not USED (USED also counts descendants and snapshots).
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
poolname 43.1G 90.7G 32.6K /poolname
poolname/1111 32.6K 10.0G 32.6K /poolname/1111
poolname/AAA 9.80G 90.7G 9.80G /AAA
poolname/AAA@1 30.0K - 32.6K -
poolname/BBB 32.6K 90.7G 32.6K /ABC
poolname/qqqq 65.3K 90.7G 32.6K /te
poolname/qqqq/qqqq 32.6K 90.7G 32.6K /qqq
poolname/ttt 33.3G 90.7G 6.73M /ttt
poolname/ttt/q 33.3G 90.7G 33.3G /aaa
The file system actually using the most space is poolname/ttt/q, mounted at /aaa.
Note: zfs list, like df -k, is sometimes less accurate than du -sh; the figures can lag behind, as the repeated listings below show.
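For one dataset, the space-accounting numbers can also be read directly instead of scanning zfs list output; a sketch reusing a dataset from the listing above:
# zfs get used,available,referenced poolname/ttt/q
NAME            PROPERTY    VALUE  SOURCE
poolname/ttt/q  used        33.3G  -
poolname/ttt/q  available   90.7G  -
poolname/ttt/q  referenced  33.3G  -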
# mkfile 2g test
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
poolname 18.8G 115G 32.6K /poolname
poolname/1 1.39G 8.61G 35.3K /poolname/1
poolname/1/1 1.39G 8.61G 1.39G /poolname/1/1
poolname/AAA 9.80G 115G 9.80G /AAA
poolname/AAA@1 30.0K - 32.6K -
poolname/ttt 6.73M 115G 6.73M /ttt
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
poolname 18.8G 115G 32.6K /poolname
poolname/1 1.74G 8.26G 35.3K /poolname/1
poolname/1/1 1.74G 8.26G 1.74G /poolname/1/1
poolname/AAA 9.80G 115G 9.80G /AAA
poolname/AAA@1 30.0K - 32.6K -
poolname/ttt 6.73M 115G 6.73M /ttt
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
poolname 18.8G 115G 32.6K /poolname
poolname/1 1.85G 8.15G 35.3K /poolname/1
poolname/1/1 1.85G 8.15G 1.85G /poolname/1/1
poolname/AAA 9.80G 115G 9.80G /AAA
poolname/AAA@1 30.0K - 32.6K -
poolname/ttt 6.73M 115G 6.73M /ttt
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
poolname 18.8G 115G 32.6K /poolname
poolname/1 1.94G 8.06G 35.3K /poolname/1
poolname/1/1 1.94G 8.06G 1.94G /poolname/1/1
poolname/AAA 9.80G 115G 9.80G /AAA
poolname/AAA@1 30.0K - 32.6K -
poolname/ttt 6.73M 115G 6.73M /ttt
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
poolname 18.8G 115G 32.6K /poolname
poolname/1 2.00G 8.00G 35.3K /poolname/1
poolname/1/1 2.00G 8.00G 2.00G /poolname/1/1
poolname/AAA 9.80G 115G 9.80G /AAA
poolname/AAA@1 30.0K - 32.6K -
poolname/ttt 6.73M 115G 6.73M /ttt
# du -sh .
2.0G .
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
poolname 18.8G 115G 32.6K /poolname
poolname/1 2.00G 8.00G 35.3K /poolname/1
poolname/1/1 2.00G 8.00G 2.00G /poolname/1/1
poolname/AAA 9.80G 115G 9.80G /AAA
poolname/AAA@1 30.0K - 32.6K -
poolname/ttt 6.73M 115G 6.73M /ttt
An experiment to check whether the quota is enforced:
# zfs create poolname/1/2
# zfs set quota=1g poolname/1/2
# cd ../2
# mkfile 2g test
test: initialized 1074274304 of 2147483648 bytes: Disc quota exceeded
# du -sh .
1.0G .
So the quota took effect.
Continuing from the experiment above, another one to verify that reservation works:
# zfs set quota=none poolname/1/2
RCGSM-root-/poolname/1/2> zfs set reservation=9.5g poolname/1/2
cannot set property for 'poolname/1/2': size is greater than available space
RCGSM-root-/poolname/1/2> zfs set reservation=9.5g poolname/1/2
RCGSM-root-/poolname/1/2> rm test
RCGSM-root-/poolname/1/2> zfs set reservation=9.5g poolname/1/2
cannot set property for 'poolname/1/2': size is greater than available space
RCGSM-root-/poolname/1/2> ls
RCGSM-root-/poolname/1/2> du -sh
2K .
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
poolname 18.8G 115G 32.6K /poolname
poolname/1 2.00G 8.00G 36.6K /poolname/1
poolname/1/1 2.00G 8.00G 2.00G /poolname/1/1
poolname/1/2 32.6K 8.00G 32.6K /poolname/1/2
poolname/AAA 9.80G 115G 9.80G /AAA
poolname/AAA@1 30.0K - 32.6K -
poolname/ttt 6.73M 115G 6.73M /ttt
# cd ../1
# ls
test
# rm test
# zfs set reservation=9.5g poolname/1/2
# cd ../1
# mkfile 1g test2
test2: initialized 537010176 of 1073741824 bytes: Disc quota exceeded
# du -sh .
512M .
This shows that both the quota and reservation properties do their job.