
On my FreeNAS NAS (9.1.1, running ZFS v28), I am getting terrible performance for file moves between two directories in the same raidz filesystem. Is this expected? If not, how can I fault-find it?

The application in this case is Beets (MP3 management software), running in a jail on the NAS itself, so it isn't a case of CIFS performance or network issues - the data never leaves the server. All the software is doing is renaming files into a different directory, but the performance is as if it were copying all the data.
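One simple fault-finding step, independent of Beets, would be to time a bare rename of a single large file from a shell inside the jail; the paths below are placeholders for a real file and the real target directory:

# time a plain rename between the two directories - a true rename is a metadata-only
# operation and should finish in milliseconds regardless of file size
time mv /mnt/music/incoming/some-album/track.mp3 /mnt/music/library/some-album/

If that also takes as long as a copy would, the problem is in the filesystem rather than in Beets.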

The system is not under any particular load. I have actually stopped the other processes running on the server just to free up some memory and CPU, just in case.

Update 1: The two directories are on the same mountpoint within the jail. The pool is 4 x 2TB SATA drives in a raidz1. No dedupe or compression.
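For reference, the same-mountpoint and no-dedupe/compression claims can be double-checked with something like the following (directory paths are placeholders, and the zfs command generally has to be run on the FreeNAS host rather than inside the jail):

# both directories should report the same ZFS dataset; if they were on different
# datasets, mv would silently fall back to copy-and-delete
df -h /mnt/music/incoming /mnt/music/library

# confirm the dataset really has compression and dedup switched off
zfs get compression,dedup jumbo1/music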

Update 2: Disabling atime on the filesystem also makes no difference (I thought I may as well try it).
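For anyone repeating this, the change amounts to something along these lines; jumbo1/music is a guess at the relevant dataset based on the zfs list output below:

zfs set atime=off jumbo1/music
zfs get atime jumbo1/music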

Update 3: zfs/zpool output.

[root@Stillmatic2] ~# zpool status
  pool: jumbo1
 state: ONLINE
  scan: scrub repaired 0 in 95h19m with 0 errors on Wed Jul 16 23:20:06 2014
config:

        NAME        STATE     READ WRITE CKSUM
        jumbo1      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0

errors: No known data errors

[root@Stillmatic2] ~# zfs list
NAME                                                         USED  AVAIL  REFER  MOUNTPOINT
jumbo1                                                      5.32T  21.4G  40.4K  /mnt/jumbo1
jumbo1/data                                                 76.0G  21.4G  76.0G  /mnt/jumbo1/data
jumbo1/howie                                                2.03G  21.4G  2.03G  /mnt/jumbo1/howie
jumbo1/jails                                                45.1G  21.4G   139M  /mnt/jumbo1/jails
jumbo1/jails/.warden-template-9.1-RELEASE-amd64              347M  21.4G   347M  /mnt/jumbo1/jails/.warden-template-9.1-RELEASE-amd64
jumbo1/jails/.warden-template-9.1-RELEASE-amd64-pluginjail   853M  21.4G   852M  /mnt/jumbo1/jails/.warden-template-9.1-RELEASE-amd64-pluginjail
jumbo1/jails/hj-tools                                       43.8G  21.4G  44.1G  /mnt/jumbo1/jails/hj-tools
jumbo1/movies                                               1.56T  21.4G  1.56T  /mnt/jumbo1/movies
jumbo1/music                                                1.45T  21.4G  1.45T  /mnt/jumbo1/music
jumbo1/tv                                                   2.19T  21.4G  2.19T  /mnt/jumbo1/tv

1 Answer


21 GB available out of ~6 TB => less than 1% free space. ZFS best practice is to keep 20% free space on RAIDZ pools, and roughly 10% free is effectively the minimum for any reasonable performance. You need to free up some space or expand the array.
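For context, the pool-level view of the same numbers can be pulled with zpool list (output here would be illustrative; CAP is the column to watch):

# CAP should read around 99% - 21.4G free against ~5.3T used is well under 1% free
zpool list jumbo1

Once a pool gets this full, ZFS falls back to a much slower best-fit space allocator and has to hunt for scattered free blocks, so even operations that look metadata-only from the application's side, such as renames, slow to a crawl.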

Side notes:

  1. SATA drives need to be scrubbed weekly if you expect to detect array failures before you get into likely data-loss territory; it looks like it's been a month since the last scrub (a sample schedule is sketched after this list).
  2. With a 4 x 2TB RAIDZ1, you're probably still looking at whole-percent odds of the array failing during a rebuild, because of the way rebuilds work. See What counts as a 'large' raid 5 array? for details.
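Regular scrubs can be scheduled through the FreeNAS web GUI's scrub scheduler, or with a plain cron entry on the host along these lines (pool name taken from the zpool status above; the schedule itself is just an example):

# root crontab entry: start a scrub of jumbo1 every Sunday at 03:00
0 3 * * 0 /sbin/zpool scrub jumbo1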
Chris S