
I upgraded the RAID firmware and latency seems to have increased.

Could you please advise on the most accurate way to examine disk read and write latencies?

forker

6 Answers


IOzone is a great benchmark.

Run its random read, random write, and random mix tests to check latencies.
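For example, a minimal invocation might look like this (a sketch, not from the original answer; file and record sizes are placeholders, and -i 0 must run first so the random test selected by -i 2 has a file to operate on):

iozone -I -O -e -i 0 -i 2 -s 512m -r 4k   # direct I/O, include fsync in timing, report ops/sec

Higher operations per second in the random tests correspond to lower per-request latency.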

ooshro

I would suggest using fio, as I described in this answer: https://askubuntu.com/a/991311/50254

fio lets you look at the distribution of latencies for different workloads (anything from single-threaded huge sequential writes at QD32 to mixed random 4k reads and writes at QD1 with multiple threads).

If you suspect the hardware now has higher latency than before, test single-threaded random 4k reads at QD1 (the most latency-sensitive operation I can think of):

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting

That will create a test file named fio-tempfile.dat in the current working directory, so make sure you first change to the device you want to test. (If you want to test raw device performance instead of performance through a filesystem, you can use the raw device as the filename. You will lose the current contents of that device, though.)

You can also test the same thing with writing instead of reading data:

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randwrite --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting

If you see a huge increase in write latency, you can be fairly sure that the old firmware enabled the write-back cache and the new firmware does not. Unless you have a battery-backed RAID controller or a correctly configured UPS, you do not want a write-back cache if you consider your data important.
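As a side check, for a directly attached SATA drive hdparm can report whether the drive-level write cache is enabled (a sketch; /dev/sda is a placeholder, and behind a hardware RAID controller you would need the controller's own CLI instead):

hdparm -W /dev/sda   # prints the current write-caching setting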


The common tool for Linux hard disk tuning and basic performance monitoring is hdparm.
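For example, a quick read benchmark looks like the following (run as root; /dev/sda is a placeholder). Note that this measures cached and sequential read throughput rather than per-request latency, so it is better for spotting gross regressions than for detailed latency analysis:

hdparm -tT /dev/sda   # -T: cached reads, -t: buffered disk reads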

shellholic

You can use dd to measure the latency of a disk, e.g.:

dd if=/dev/zero of=/root/testfile bs=512 count=1000 oflag=dsync
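Since oflag=dsync forces every write to be committed to the device before the next one starts, the average write latency is roughly the elapsed time dd reports divided by the number of writes: for example, 2.5 s total for count=1000 means about 2.5 ms per write. Remember to remove /root/testfile afterwards.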
HBruijn

Disk Latency Attestation Tool

A simple yet effective tool to track disk storage latency during write operations.

This utility runs as a daemon in RAM, generates random data, and writes it as a file to the specified disk storage at regular intervals, measuring the time it takes to write a file of the same size on each iteration.

The data cannot be served from cache, because new pseudo-random data is generated for every write test.

The application produces a log as a CSV table detailing the date and time of each event, the duration of each dataset write, and any error codes and messages.
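For illustration, here is a minimal shell sketch of the same approach (hypothetical paths, sizes, and interval; the actual tool is linked below):

while true; do
  start=$(date +%s.%N)
  # fresh random data each round, so the write cannot be satisfied from a cache
  dd if=/dev/urandom of=/mnt/target/latency-probe.bin bs=1M count=16 oflag=dsync 2>/dev/null
  end=$(date +%s.%N)
  # CSV row: timestamp, write duration in seconds
  echo "$(date -Iseconds),$(echo "$end - $start" | bc)" >> latency.csv
  sleep 60
done

Note that /dev/urandom itself can become the bottleneck on fast storage, which is one reason a dedicated tool is preferable.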

Project Page of Disk Latency Attestation Tool

Dennis V

For testing disk latency, I use two utilities: ioping and fio.

ioping is the disk I/O equivalent of ping. It is very useful for a rapid first pass on the disk subsystem. Running ioping -C -D -G /your/dir results in the following output, where <<< means reading and >>> means writing:

4 KiB >>> /root/test/ (xfs /dev/dm-0 32.0 GiB): request=1 time=2.90 ms (warmup)
4 KiB <<< /root/test/ (xfs /dev/dm-0 32.0 GiB): request=2 time=278.4 us
4 KiB >>> /root/test/ (xfs /dev/dm-0 32.0 GiB): request=3 time=23.2 ms
4 KiB <<< /root/test/ (xfs /dev/dm-0 32.0 GiB): request=4 time=283.5 us
4 KiB >>> /root/test/ (xfs /dev/dm-0 32.0 GiB): request=5 time=6.29 ms

For a more intensive test (i.e. akin to a ping flood), add -R to the command above. For testing sync writes, remove -C. For example:
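Applying those changes to the command above gives:

ioping -R -D -G /your/dir   # intensive run, back-to-back requests
ioping -D -G /your/dir      # sync writes (no -C)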

fio is invaluable for in-depth analysis. Use fio --name=test --filename=/your/file --rw=randrw --size=256M --direct=1 --runtime=15 for a quick first pass. Add --sync=1 for testing sync writes. Example output:

...
read: IOPS=3708, BW=14.5MiB/s (15.2MB/s)(128MiB/8828msec)
  clat (usec): min=90, max=12403, avg=200.61, stdev=385.19
   lat (usec): min=90, max=12403, avg=200.79, stdev=385.20
...
write: IOPS=3715, BW=14.5MiB/s (15.2MB/s)(128MiB/8828msec); 0 zone resets
  clat (usec): min=32, max=16906, avg=65.05, stdev=317.08
   lat (usec): min=32, max=16906, avg=65.33, stdev=317.09
...

Bonus point: ZFS does not support true O_DIRECT reads/writes, so non-sync writes are going to be cached. Reads will be cached as well unless you set primarycache=none on the test dataset. In any case, ZFS performance testing is quite a complex task due to its strong reliance on ARC and other factors.
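If you want ZFS reads to actually hit the disks during a benchmark, you can toggle that property on the dataset (a sketch; tank/fio-test is a placeholder name):

zfs set primarycache=none tank/fio-test   # do not cache this dataset's data in the ARC
# ...run the read benchmark, then restore the default:
zfs set primarycache=all tank/fio-test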

shodanshok