Slow ZFS write speed on Linux?

Decided to give the native ZFS on Linux port (pool v28) a go on Ubuntu, using my 5x 2TB WD Green Advanced Format drives. Created the pool with zpool create -o ashift=12 raidz1.
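For reference, the create command was roughly the following; the pool name and device paths are taken from the zdb output further down, and the sparse file is the placeholder mentioned at the end:

zpool create -o ashift=12 myData raidz1 \
    /tmp/sparse2 \
    /dev/disk/by-id/wwn-0x50014ee2062a07cd \
    /dev/disk/by-id/wwn-0x50014ee25bd8fbcc \
    /dev/disk/by-id/wwn-0x50014ee25b1edbc8 \
    /dev/disk/by-id/wwn-0x50014ee25b7f2562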

zpool status doesn't show anything untoward.

I dd if=/dev/zero to the mounted pool and the write speed never gets above 20M/s. I tried rsyncing a few hundred gigs of files, but even then "zpool iostat" gives me a maximum of about 20M write. CPU usage isn't high, and my 8GB of RAM is about 90% used, which I believe is normal.
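The test was along these lines (the target filename, block size and count here are just examples rather than the exact commands I ran), with iostat watching in a second terminal:

dd if=/dev/zero of=/myData/zero.bin bs=1M count=10240    # ~10GB of zeroes into the pool
zpool iostat -v myData 5                                 # watch per-vdev write bandwidth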

Reads seem to be optimal to me.

I did play with the zfs_vdev_max/min_* module parameters. Since AHCI is enabled I tried setting them to 1, but that dropped my writes to 10M. Raising them back to a min/max of 4/8 brings the write speed back to 20M.
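For what it's worth, I was changing them at runtime roughly like this (assuming the full parameter names are zfs_vdev_min_pending / zfs_vdev_max_pending, which is what the ZoL module exposes for the per-vdev queue depth):

# queue depth of 1 per vdev - dropped my writes to ~10M
echo 1 > /sys/module/zfs/parameters/zfs_vdev_min_pending
echo 1 > /sys/module/zfs/parameters/zfs_vdev_max_pending

# back to 4/8, which gets me the usual ~20M
echo 4 > /sys/module/zfs/parameters/zfs_vdev_min_pending
echo 8 > /sys/module/zfs/parameters/zfs_vdev_max_pending

The same values can be made persistent with an "options zfs zfs_vdev_min_pending=4 zfs_vdev_max_pending=8" line in /etc/modprobe.d/zfs.conf, if that matters.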

I'm running a scrub right now and it's going at 170M/s.
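Nothing special about how it was started or checked:

zpool scrub myData
zpool status myData    # scan line currently reports around 170M/s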

I figure I must be missing something? Or is this normal?

Attached is my setup. Ignore the sparse file; it's kept so it can be replaced with a disk later.
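In case it matters, the placeholder was set up roughly like this (the 2T size is just to match the real drives):

truncate -s 2T /tmp/sparse2          # sparse file, takes no real space
# ...pool created with it as the 5th raidz1 member (see the zpool create sketch above)...
zpool offline myData /tmp/sparse2    # kept offline so nothing is ever written to it

That offline file vdev is also why the pool shows up as DEGRADED in the zpool output below.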

zdb:


myData:
version: 28
name: 'myData'
state: 0
txg: 12
pool_guid: 14947267682211456191
hostname: 'microserver'
vdev_children: 1
vdev_tree:
    type: 'root'
    id: 0
    guid: 14947267682211456191
    create_txg: 4
    children[0]:
        type: 'raidz'
        id: 0
        guid: 361537219350560701
        nparity: 1
        metaslab_array: 31
        metaslab_shift: 36
        ashift: 12
        asize: 10001923440640
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'file'
            id: 0
            guid: 18296057043113196254
            path: '/tmp/sparse2'
            DTL: 35
            create_txg: 4
            offline: 1
        children[1]:
            type: 'disk'
            id: 1
            guid: 13192250717230911873
            path: '/dev/disk/by-id/wwn-0x50014ee2062a07cd-part1'
            whole_disk: 1
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 7673445363652446830
            path: '/dev/disk/by-id/wwn-0x50014ee25bd8fbcc-part1'
            whole_disk: 1
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 1997560602751946723
            path: '/dev/disk/by-id/wwn-0x50014ee25b1edbc8-part1'
            whole_disk: 1
            create_txg: 4
        children[4]:
            type: 'disk'
            id: 4
            guid: 16890030280879643154
            path: '/dev/disk/by-id/wwn-0x50014ee25b7f2562-part1'
            whole_disk: 1
            create_txg: 4

zfs get all myData:

NAME    PROPERTY              VALUE                  SOURCE
myData  type                  filesystem             -
myData  creation              Tue Apr 24  8:15 2012  -
myData  used                  2.05T                  -
myData  available             5.07T                  -
myData  referenced            2.05T                  -
myData  compressratio         1.00x                  -
myData  mounted               yes                    -
myData  quota                 none                   default
myData  reservation           none                   default
myData  recordsize            128K                   default
myData  mountpoint            /myData                default
myData  sharenfs              off                    default
myData  checksum              on                     default
myData  compression           off                    default
myData  atime                 on                     default
myData  devices               on                     default
myData  exec                  on                     default
myData  setuid                on                     default
myData  readonly              off                    default
myData  zoned                 off                    default
myData  snapdir               hidden                 default
myData  aclinherit            restricted             default
myData  canmount              on                     default
myData  xattr                 on                     default
myData  copies                1                      default
myData  version               5                      -
myData  utf8only              off                    -
myData  normalization         none                   -
myData  casesensitivity       sensitive              -
myData  vscan                 off                    default
myData  nbmand                off                    default
myData  sharesmb              off                    default
myData  refquota              none                   default
myData  refreservation        none                   default
myData  primarycache          all                    default
myData  secondarycache        all                    default
myData  usedbysnapshots       0                      -
myData  usedbydataset         2.05T                  -
myData  usedbychildren        9.68M                  -
myData  usedbyrefreservation  0                      -
myData  logbias               latency                default
myData  dedup                 off                    default
myData  mlslabel              none                   default
myData  sync                  standard               default
myData  refcompressratio      1.00x                  -

zpool get all myData:

NAME    PROPERTY       VALUE       SOURCE
myData  size           9.06T       -
myData  capacity       28%         -
myData  altroot        -           default
myData  health         DEGRADED    -
myData  guid           14947267682211456191  default
myData  version        28          default
myData  bootfs         -           default
myData  delegation     on          default
myData  autoreplace    off         default
myData  cachefile      -           default
myData  failmode       wait        default
myData  listsnapshots  off         default
myData  autoexpand     off         default
myData  dedupditto     0           default
myData  dedupratio     1.00x       -
myData  free           6.49T       -
myData  allocated      2.57T       -
myData  readonly       off         -
myData  ashift         12          local