2

I'm using Ubuntu 11.10 Desktop x64 with Native ZFS on a mirrored pool of 2x2 TB 6.0 Gbps hard drives. My issue is that I'm only getting about 30 Mb/s read/write at any time; I would think my system could perform faster.

There are some limitations though:

  • I'm using an Asus E350M1-I Deluxe Fusion, which has a 1.6 GHz processor and supports a maximum of 8 GB of RAM, which I have installed. I didn't know about ZFS when I bought the system; these days I would have selected a system capable of holding more RAM.

  • My pool has about 15% free space, but performance wasn't that much better when I had more than 50% free space.

  • When the processor is very busy, the read/write performance seems to decrease, so it may very well be the processor that is the bottleneck.

I've read the other posts on this site about using an SSD as a log/cache device, which is what I'm thinking of doing, considering I don't have that much RAM.

My questions:

  1. Do you think adding an SSD as a log/cache device will improve performance?

  2. Should I instead get another 2 TB hard drive and make a RAID-Z pool? (I'm going to need the space eventually, but the price of mechanical drives is still high.) Would this increase performance?

  3. Sell my system and go for an Intel i3 instead?

Thanks for your time!

knorrhane

4 Answers

5

Note that due to licensing concerns ZFS is not part of the mainline Linux kernel. If you are running the FUSE implementation in userspace rather than the native kernel module, it carries significant operational overhead, which is also well visible in benchmarks. I believe this to be the main problem here - a high amount of overhead in conjunction with the rather low processing performance of your system.
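
One quick way to check which implementation you are actually running (assuming a stock Ubuntu setup with either the zfsonlinux kernel module or the zfs-fuse package) is:

    # If ZFS runs as a kernel module ("native" ZFS on Linux),
    # this should list the zfs module:
    lsmod | grep zfs

    # If you are on the FUSE implementation instead, a zfs-fuse
    # userspace daemon will be running:
    ps -C zfs-fuse -o pid,pcpu,rss,cmd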

In general, adding an SSD in whatever capacity will only be of any help if I/O is actually a bottleneck. Use iostat to verify this.
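
For example (assuming the sysstat package is installed - adjust the interval and device names to your setup):

    # Extended per-device statistics, refreshed every 5 seconds.
    # High %util and await on the pool disks point at the disks as
    # the bottleneck; low %util with a pegged CPU points at the CPU.
    iostat -x 5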

Adding an SSD as a separate log device will only help if your main problem is synchronous write load. It will not do anything for reads or asynchronous writes (which are cached and lazily written). As a simple yet quite effective test, you can temporarily disable the intent log - if your overall performance increases significantly, you would benefit from an SSD log device.
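
If your ZFS version already has the sync property (assuming a pool named "tank" here - older builds needed a module tunable instead), the test and the eventual log device look roughly like this:

    # Temporarily disable synchronous write semantics for the test
    # (do not leave this enabled on data you care about):
    zfs set sync=disabled tank

    # ...run your write workload and compare throughput...

    # Restore the default behaviour afterwards:
    zfs set sync=standard tank

    # If the gain is large, a small SSD can be added as a dedicated
    # log device (device name is just an example):
    zpool add tank log /dev/sdc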

Adding an SSD as an L2ARC will help your reads if you have a rather compact "hot" area on your filesystem which is frequently read in a random fashion. L2ARC does not cache sequential transfers, so it would be rather ineffectual for streaming loads.
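
Adding a cache device is a single command as well (again assuming a pool named "tank" and an example device name):

    # Attach an SSD as an L2ARC (read cache) device; it can be
    # removed again with 'zpool remove' if it does not help:
    zpool add tank cache /dev/sdc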

the-wabbit
0

"...using an SSD as a log/cache device, which is what I'm thinking of doing, considering I don't have that much RAM."

Eh? Main system RAM has nothing to do with it. Availability of RAM has a big impact on I/O performance in general - but you cannot use RAM for the disk journal (ZIL); that storage must be non-volatile.

You seem rather confused about how to solve your current problems - which makes me think your reasons for choosing ZFS may be flawed. It is certainly interesting technically and has obvious benefits in managing very large volume groups, but that does not apply here - and I've not seen anyone recommending it over the usual suspects on Linux for performance. Have you tried running the same workloads on XFS or ext4? You'll probably find them a lot faster.
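
A crude but quick way to compare (just a sketch - a zero-filled file won't be representative if compression is enabled, and it only measures sequential throughput):

    # Sequential write test, flushing to disk before reporting:
    dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=2048 conv=fdatasync

    # Sequential read test - drop the page cache first (as root)
    # so the data is actually read from disk:
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/test/bigfile of=/dev/null bs=1M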

Given the price of an SSD to support this (see also my question here - Flash won't work) it's hard to understand why you think that an SSD will be a cost-effective way to improve performance. Yes, it will make it go faster - but I think you'd be better off spending the money on a new dual-socket mobo, CPUs and doubling the memory (and you'll still have change left).

symcbean
0

I put native ZFS through testing for our servers, and I found it unreliable; it lost data in tests. I also found the performance low even with ample CPU resources. I was using it to supply block devices (essentially an LVM replacement with integrity), not as a filesystem. This was on Ubuntu 10.10, so YMMV. I found it very sensitive to any sort of power failure or system hang, and not as capable of recovering from this as the various native journaling filesystems on Linux.

0

There is OpenSolaris, and its ZFS implementation is Sun's own, I guess. You can always try that. I don't think you'll be able to run VirtualBox on it, though - but check it out, I may be wrong. Or you can virtualize Solaris with the disks attached to the VM. Of course performance won't be excellent, but it seems you have time to try out weird setups...