I recently formatted a 20TB array as XFS. For file transfers < 10 GB, performance is fantastic. However, once I exceed 10 GB, disk I/O decays logarithmically. The journal is on the same disk. When I look at disk I/O usage, I see that the jbd2 flush process is using nearly all of the disk I/O. How can I resolve this? (Is there any way to do it without removing the data?) I am running Ubuntu 10.10. Thank you.
3 Answers
The journal is rarely the performance problem except for small random write I/O: the log is a circular buffer that is accessed sequentially, since it is only ever appended to.
How much memory does the system have? What are the watermarks on the pdflush side of things?
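The writeback watermarks in question are the `vm.dirty_*` sysctls; on the 2.6.35 kernel that ships with Ubuntu 10.10 they can be read straight out of `/proc` (a quick sketch):

```shell
# The pdflush/writeback watermarks: how much of RAM may be dirty before
# background writeback starts, and before writers are forced to block.
cat /proc/sys/vm/dirty_background_ratio  # % dirty pages before background flush
cat /proc/sys/vm/dirty_ratio             # % dirty pages before writers block
cat /proc/sys/vm/dirty_expire_centisecs  # max age of dirty data before flush
```

With a lot of RAM and the defaults, tens of gigabytes can accumulate as dirty pages before writeback kicks in hard, which would fit a cliff around the 10 GB mark.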
Can you produce a throughput diagram, i.e. plot some of the columns of `iostat -x -k 1` with gnuplot or something like that, for the < 10 GB and > 10 GB cases?
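For example, one way to get such a plot (the device name `sdb` and the field position of the wkB/s column are assumptions; verify against your `iostat -x -k 1 1` header):

```shell
# To collect and plot, something along these lines:
#
#   iostat -x -k 1 600 | awk '$1 == "sdb" { print ++t, $7; fflush() }' > wkbs.dat
#   gnuplot -e "set terminal dumb; plot 'wkbs.dat' using 1:2 with lines"
#
# The awk filter numbers each sample and keeps the wkB/s column; shown
# here on one sample iostat line:
printf 'sdb 0.00 12.00 0.50 98.00 4.00 4400.00 44.00\n' \
  | awk '$1 == "sdb" { print ++t, $7 }'
```

Run one capture during a transfer that stays under 10 GB and another during a larger one, and the shape of the decay (and whether it coincides with flush storms) should be obvious from the two plots.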
To reduce contention under high load, I would put the journal on a separate disk. The easiest option would have been to use one disk from the array itself, but you probably cannot do that anymore without a backup and restore. Setting a dedicated log device is explained in the tune2fs(8) manpage under the -J option.
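A rough sketch of the procedure, with /dev/md0 standing in for the array and /dev/sdc1 for the dedicated journal disk (both placeholders; the filesystem must be cleanly unmounted first, and you should have a backup regardless):

```shell
# Move an ext4 journal to an external device. DESTRUCTIVE to /dev/sdc1.
umount /mnt/array
tune2fs -O ^has_journal /dev/md0      # drop the internal journal
mke2fs -O journal_dev /dev/sdc1       # format the journal device
tune2fs -J device=/dev/sdc1 /dev/md0  # attach it as the external journal
```

Note the journal device should be at least as fast as the array for this to pay off; a small, fast disk (or partition on one) is the usual choice.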
Is your XFS file system aligned to the RAID stripe size? This may not be directly related to your performance issue, but proper alignment definitely improves write performance. Please check this thread for some general ideas and samples.
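As an illustration of what alignment looks like (the RAID geometry below is a made-up example, not your array): `su` is the RAID chunk size and `sw` the number of data-bearing disks, and an existing filesystem's alignment shows up as sunit/swidth in `xfs_info /mountpoint`.

```shell
# Hypothetical RAID6 of 10 disks (8 data) with 512 KiB chunks:
# su = chunk size, sw = number of data disks.
chunk_kib=512
data_disks=8
echo "mkfs.xfs -d su=${chunk_kib}k,sw=${data_disks} /dev/md0"
```

If `xfs_info` reports sunit=0, swidth=0 on a striped array, the filesystem was made without alignment hints, and only a re-mkfs (after backup) fixes that.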