Mirror seeing less than half the write IOPS on one disk compared to the other, is this normal?

I’m syncoiding from my normal RAIDz2 pool to a backup mirror made of two disks. Looking at zpool iostat, I noticed that one of the disks consistently shows less than half the write IOPS of the other:

                                        capacity     operations     bandwidth 
pool                                  alloc   free   read  write   read  write
------------------------------------  -----  -----  -----  -----  -----  -----
storage-volume-backup                 5.03T  11.3T      0    867      0   330M
  mirror-0                            5.03T  11.3T      0    867      0   330M
    wwn-0x5000c500e8736faf                -      -      0    212      0   164M
    wwn-0x5000c500e8737337                -      -      0    654      0   165M
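(That snapshot is from watching the pool at an interval, something like:)

zpool iostat -v storage-volume-backup 5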

This is also evident in iostat:

     f/s f_await  aqu-sz  %util Device
    0.00    0.00    3.48  46.2% sda
    0.00    0.00    8.10  99.7% sdb
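(Those are the extended device statistics, from something like:)

iostat -dx 5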

The difference also shows in the disk temperatures: the busier disk runs 4 degrees warmer than the other. The disks are identical on paper and were bought at the same time.

Is this behaviour expected?

Looks like a failing drive from here. It could take another five years of acting exactly like that before it fails catastrophically, mind you… or it could fail catastrophically before I’m done typing this.

I’d replace it. Life’s too short to tolerate deranged gear.


So not expected. Someone on Lemmy suggested it could be a saturated shared link, so I tried switching the SATA cables; no difference. Then I placed the disk showing the lower IOPS in a known good USB 3 enclosure. Lo and behold, the picture flipped:

                                        capacity     operations     bandwidth 
pool                                  alloc   free   read  write   read  write
------------------------------------  -----  -----  -----  -----  -----  -----
storage-volume-backup                 12.6T  3.74T      0    563      0   293M
  mirror-0                            12.6T  3.74T      0    563      0   293M
    wwn-0x5000c500e8736faf                -      -      0    406      0   146M
    wwn-0x5000c500e8737337                -      -      0    156      0   146M

Perhaps it is link-related? I’ve never used this SATA controller before; it’s an old AMD B350 chipset controller. I’m not looking for a definitive answer, just trying to figure out whether this is expected ZFS behaviour (seems like no) and whether I can remove reasonable suspicion from the disks. Maybe I can’t. I also wonder how I would tell which disk is deranged. Perhaps by load testing each disk with fio, outside of ZFS.
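Beyond fio, I suppose the first sanity check outside of ZFS would be each disk’s SMART data, along the lines of:

sudo smartctl -x /dev/disk/by-id/wwn-0x5000c500e8736faf
sudo smartctl -x /dev/disk/by-id/wwn-0x5000c500e8737337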

BTW the disks are new Exos X22. Less than 24 hours on them.

I observe you’re getting the same bandwidth on both disks (though less overall with the USB enclosure); it’s just that one disk or the other is doing more IOPS to accomplish the same bandwidth. Is less happening per IO? What does zpool iostat -rv look like?
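(-r adds the per-vdev request size histograms; you’d capture them during the workload with something like:)

zpool iostat -rv storage-volume-backup 10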

Do you expect your known good USB3 enclosure to perform better than the SATA? Maybe you gave the good disk a handicap! You said you switched the cables: did you also switch the ports?


The iostat snapshots posted were taken at different points in time. I was also curious about the lower total bandwidth when one disk was on USB, so I put it back on SATA for a bit and the overall bandwidth did not increase. I’m thinking it was lower because the writes had progressed further along the platters, where bandwidth is lower.
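If I wanted to verify that, a raw sequential read near the start of a disk versus near the end should show the difference. A rough sketch (device name and offset picked arbitrarily, with --readonly for safety):

sudo fio --name=outer --filename=/dev/sdb --readonly --rw=read --bs=1M \
--direct=1 --time_based --runtime=15s
sudo fio --name=inner --filename=/dev/sdb --readonly --rw=read --bs=1M \
--direct=1 --time_based --runtime=15s --offset=90%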

I tested both disks using the GNOME Disks benchmark prior to using them with ZFS. Both disks showed similar sequential read speed and similar latency (that’s all this benchmark tests). I retested over USB later and the results looked the same.

When trying USB, I moved the low-IOPS disk to the enclosure, where it became the high-IOPS disk.

I looked at -rv too, but I don’t have the data anymore since the workload is over. Both disks were receiving large ops, but the high-IOPS disk was getting slightly fewer large ops plus a bunch of 128K ops, while the low-IOPS disk was getting very few or no 128K ops. When the low-IOPS disk became the high-IOPS disk, the same behaviour transferred with it: the 128K ops went to it, while almost none went to the other.

Honestly, I don’t know what to expect from this SATA controller since I’ve never used it. The USB 3 enclosure, on the other hand, has been clocked maxing out other disks before, close to 300 MB/s if I remember correctly. So while I’m not expecting it to be faster (it’s 5 Gbps USB 3 versus 6 Gbps SATA), I don’t expect it to be a handicap either. But of course that’s not impossible, and it’s a solid hypothesis.

I did swap the SATA ports, yes; I should have mentioned that.

Unfortunately, the send/receive to the backup pool is done and I’m already transferring data from it to a newly rebuilt main pool, so I can’t get any more data. There’s now read load on the backup pool in question, and it looks perfect:

                               capacity     operations     bandwidth 
pool                         alloc   free   read  write   read  write
---------------------------  -----  -----  -----  -----  -----  -----
storage-volume-backup        16.0T   373G    322     31   212M   355K
  mirror-0                   16.0T   373G    322     31   212M   355K
    wwn-0x5000c500e8736faf       -      -    161     15   106M   178K
    wwn-0x5000c500e8737337       -      -    160     15   107M   178K

No visible imbalance in IOPS or bandwidth. Both disks are on SATA now.

Once this is done, I’m gonna destroy the pool and benchmark each of the two drives independently with fio for sequential and random reads/writes. If one disk has significant enough problems, I reckon it should show. Thoughts?

Alright, to cap this thing off, I destroyed the mirror and ran some benchmarks on the two disks. As far as I can tell, they’re performing the same. For now, I’m chalking the IOPS strangeness during the ZFS send/receive up to the software or the links. I won’t be RMAing either of the disks.

Here are the results:
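(To match the wwn names below with the sdX devices in the fio disk stats, the by-id symlinks can be resolved, e.g.:)

readlink -f /dev/disk/by-id/wwn-0x5000c500e8736faf
readlink -f /dev/disk/by-id/wwn-0x5000c500e8737337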

Sequential writes

sudo fio --name=write_throughput --directory=$TEST_DIR --numjobs=16 \
--size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio \
--direct=1 --verify=0 --bs=1M --iodepth=64 --rw=write \
--group_reporting=1 --iodepth_batch_submit=64 \
--iodepth_batch_complete_max=64

wwn-0x5000c500e8737337

write_throughput: (groupid=0, jobs=16): err= 0: pid=63995: Tue Sep 10 00:20:35 2024
  write: IOPS=237, BW=254MiB/s (267MB/s)(15.3GiB/61593msec); 0 zone resets
    slat (msec): min=17, max=5282, avg=2700.84, stdev=1262.36
    clat (usec): min=3, max=5282.9k, avg=907420.53, stdev=1341910.28
     lat (msec): min=168, max=9916, avg=3573.74, stdev=1438.61
    clat percentiles (usec):
     |  1.00th=[      6],  5.00th=[      7], 10.00th=[      7],
     | 20.00th=[      8], 30.00th=[      9], 40.00th=[     10],
     | 50.00th=[     20], 60.00th=[ 429917], 70.00th=[1098908],
     | 80.00th=[2055209], 90.00th=[3338666], 95.00th=[3976201],
     | 99.00th=[4596958], 99.50th=[4731175], 99.90th=[4999611],
     | 99.95th=[5133829], 99.99th=[5268046]
   bw (  MiB/s): min= 1702, max= 1996, per=100.00%, avg=1912.06, stdev= 5.10, samples=247
   iops        : min= 1698, max= 1996, avg=1911.61, stdev= 5.17, samples=247
  lat (usec)   : 4=0.03%, 10=46.97%, 20=3.23%
  lat (msec)   : 20=0.37%, 50=0.35%, 100=1.45%, 250=4.15%, 500=4.53%
  lat (msec)   : 750=4.51%, 1000=3.54%, 2000=10.40%, >=2000=20.79%
  cpu          : usr=0.04%, sys=0.08%, ctx=14765, majf=0, minf=932
  IO depths    : 1=0.0%, 2=0.4%, 4=3.9%, 8=15.3%, 16=29.7%, 32=49.4%, >=64=0.4%
     submit    : 0=0.0%, 4=4.8%, 8=4.4%, 16=11.6%, 32=28.6%, 64=50.6%, >=64=0.0%
     complete  : 0=0.0%, 4=0.8%, 8=0.4%, 16=0.8%, 32=0.0%, 64=98.1%, >=64=0.0%
     issued rwts: total=0,14649,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=254MiB/s (267MB/s), 254MiB/s-254MiB/s (267MB/s-267MB/s), io=15.3GiB (16.4GB), run=61593-61593msec

Disk stats (read/write):
  sdb: ios=0/16274, merge=0/82, ticks=0/3863103, in_queue=3884040, util=97.75%

wwn-0x5000c500e8736faf

write_throughput: (groupid=0, jobs=16): err= 0: pid=52318: Tue Sep 10 00:15:53 2024
  write: IOPS=214, BW=231MiB/s (242MB/s)(14.1GiB/62759msec); 0 zone resets
    slat (msec): min=16, max=6562, avg=3029.51, stdev=1307.95
    clat (usec): min=3, max=6563.0k, avg=1022632.46, stdev=1508225.25
     lat (msec): min=199, max=9641, avg=3975.68, stdev=1408.43
    clat percentiles (usec):
     |  1.00th=[      6],  5.00th=[      7], 10.00th=[      7],
     | 20.00th=[      8], 30.00th=[      9], 40.00th=[     10],
     | 50.00th=[  69731], 60.00th=[ 375391], 70.00th=[1182794],
     | 80.00th=[2533360], 90.00th=[3674211], 95.00th=[4328522],
     | 99.00th=[4865393], 99.50th=[4999611], 99.90th=[5133829],
     | 99.95th=[5133829], 99.99th=[6543115]
   bw (  MiB/s): min= 1725, max= 1988, per=100.00%, avg=1899.92, stdev= 4.38, samples=228
   iops        : min= 1722, max= 1988, avg=1899.46, stdev= 4.43, samples=228
  lat (usec)   : 4=0.25%, 10=42.09%, 20=7.19%
  lat (msec)   : 100=1.98%, 250=5.18%, 500=5.50%, 750=4.91%, 1000=1.86%
  lat (msec)   : 2000=8.66%, >=2000=22.81%
  cpu          : usr=0.03%, sys=0.07%, ctx=13735, majf=0, minf=937
  IO depths    : 1=0.0%, 2=0.0%, 4=4.3%, 8=13.8%, 16=28.6%, 32=50.9%, >=64=1.0%
     submit    : 0=0.0%, 4=5.3%, 8=4.1%, 16=13.2%, 32=26.7%, 64=50.7%, >=64=0.0%
     complete  : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.8%, 32=0.8%, 64=97.2%, >=64=0.0%
     issued rwts: total=0,13443,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=231MiB/s (242MB/s), 231MiB/s-231MiB/s (242MB/s-242MB/s), io=14.1GiB (15.2GB), run=62759-62759msec

Disk stats (read/write):
  sda: ios=0/14964, merge=0/2791, ticks=0/4004995, in_queue=4029034, util=95.58%

Random writes

sudo fio --name=write_iops --directory=$TEST_DIR --size=10G \
--time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \
--verify=0 --bs=4K --iodepth=256 --rw=randwrite --group_reporting=1  \
--iodepth_batch_submit=256  --iodepth_batch_complete_max=256

wwn-0x5000c500e8737337

write_iops: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [w(1)][100.0%][w=3072KiB/s][w=768 IOPS][eta 00m:00s]
write_iops: (groupid=0, jobs=1): err= 0: pid=68916: Tue Sep 10 00:23:21 2024
  write: IOPS=737, BW=2969KiB/s (3040kB/s)(174MiB/60152msec); 0 zone resets
    slat (usec): min=20, max=1650.8k, avg=187540.17, stdev=241256.65
    clat (usec): min=3, max=1792.0k, avg=120258.48, stdev=242532.26
     lat (msec): min=15, max=1976, avg=307.65, stdev=333.98
    clat percentiles (usec):
     |  1.00th=[      6],  5.00th=[      8], 10.00th=[     10],
     | 20.00th=[     12], 30.00th=[     15], 40.00th=[  15926],
     | 50.00th=[  55837], 60.00th=[  86508], 70.00th=[ 124257],
     | 80.00th=[ 227541], 90.00th=[ 235930], 95.00th=[ 252707],
     | 99.00th=[1568670], 99.50th=[1635779], 99.90th=[1769997],
     | 99.95th=[1769997], 99.99th=[1769997]
   bw (  KiB/s): min= 1536, max= 9320, per=100.00%, avg=3700.23, stdev=1572.08, samples=96
   iops        : min=  384, max= 2330, avg=924.98, stdev=393.00, samples=96
  lat (usec)   : 4=0.01%, 10=15.30%, 20=18.69%, 100=1.75%, 250=0.29%
  lat (msec)   : 4=0.31%, 10=2.90%, 20=2.66%, 50=8.17%, 100=13.42%
  lat (msec)   : 250=31.60%, 500=2.85%, 750=0.14%, 2000=2.48%
  cpu          : usr=0.05%, sys=1.09%, ctx=5523, majf=0, minf=58
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=1.2%, >=64=98.6%
     submit    : 0=0.0%, 4=2.5%, 8=0.2%, 16=3.2%, 32=7.7%, 64=15.2%, >=64=71.1%
     complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.4%, >=64=99.6%
     issued rwts: total=0,44389,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
  WRITE: bw=2969KiB/s (3040kB/s), 2969KiB/s-2969KiB/s (3040kB/s-3040kB/s), io=174MiB (183MB), run=60152-60152msec

Disk stats (read/write):
  sdb: ios=0/46937, merge=0/3477, ticks=0/3754324, in_queue=3770960, util=96.57%

wwn-0x5000c500e8736faf

write_iops: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [w(1)][100.0%][w=3900KiB/s][w=975 IOPS][eta 00m:00s]
write_iops: (groupid=0, jobs=1): err= 0: pid=71181: Tue Sep 10 00:24:28 2024
  write: IOPS=722, BW=2908KiB/s (2978kB/s)(171MiB/60086msec); 0 zone resets
    slat (usec): min=23, max=1669.6k, avg=194775.19, stdev=259759.10
    clat (usec): min=3, max=1679.2k, avg=116068.16, stdev=233613.94
     lat (msec): min=10, max=1906, avg=310.91, stdev=339.52
    clat percentiles (usec):
     |  1.00th=[      6],  5.00th=[      8], 10.00th=[     10],
     | 20.00th=[     12], 30.00th=[     16], 40.00th=[  14615],
     | 50.00th=[  46924], 60.00th=[  83362], 70.00th=[ 123208],
     | 80.00th=[ 229639], 90.00th=[ 246416], 95.00th=[ 256902],
     | 99.00th=[1518339], 99.50th=[1568670], 99.90th=[1686111],
     | 99.95th=[1686111], 99.99th=[1686111]
   bw (  KiB/s): min= 1536, max= 9296, per=100.00%, avg=3658.20, stdev=1553.55, samples=95
   iops        : min=  384, max= 2324, avg=914.47, stdev=388.39, samples=95
  lat (usec)   : 4=0.01%, 10=13.83%, 20=21.72%, 50=0.35%, 100=0.62%
  lat (msec)   : 10=2.82%, 20=3.40%, 50=8.27%, 100=13.85%, 250=27.50%
  lat (msec)   : 500=5.43%, 2000=2.31%
  cpu          : usr=0.05%, sys=1.10%, ctx=5376, majf=0, minf=58
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=1.8%, >=64=98.5%
     submit    : 0=0.0%, 4=2.3%, 8=1.8%, 16=3.6%, 32=6.6%, 64=13.8%, >=64=71.9%
     complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.5%, >=64=99.5%
     issued rwts: total=0,43425,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
  WRITE: bw=2908KiB/s (2978kB/s), 2908KiB/s-2908KiB/s (2978kB/s-2978kB/s), io=171MiB (179MB), run=60086-60086msec

Disk stats (read/write):
  sda: ios=0/46286, merge=0/3621, ticks=0/3733442, in_queue=3750763, util=96.85%

Sequential reads

sudo fio --name=read_throughput --directory=$TEST_DIR --numjobs=16 \
--size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio \
--direct=1 --verify=0 --bs=1M --iodepth=64 --rw=read \
--group_reporting=1 \
--iodepth_batch_submit=64 --iodepth_batch_complete_max=64

wwn-0x5000c500e8737337

read_throughput: (groupid=0, jobs=16): err= 0: pid=103355: Tue Sep 10 00:40:41 2024
  read: IOPS=222, BW=239MiB/s (251MB/s)(14.4GiB/61806msec)
    slat (msec): min=10, max=5329, avg=2927.82, stdev=1141.97
    clat (usec): min=4, max=6053.7k, avg=941408.86, stdev=1399925.15
     lat (msec): min=582, max=10782, avg=3770.82, stdev=1314.25
    clat percentiles (usec):
     |  1.00th=[      6],  5.00th=[      7], 10.00th=[      7],
     | 20.00th=[      8], 30.00th=[      8], 40.00th=[      9],
     | 50.00th=[     10], 60.00th=[  55837], 70.00th=[1400898],
     | 80.00th=[2231370], 90.00th=[3338666], 95.00th=[3942646],
     | 99.00th=[4798284], 99.50th=[4999611], 99.90th=[5804917],
     | 99.95th=[6006244], 99.99th=[6006244]
   bw (  MiB/s): min= 1652, max= 2017, per=100.00%, avg=1924.18, stdev= 6.50, samples=234
   iops        : min= 1646, max= 2016, avg=1923.11, stdev= 6.60, samples=234
  lat (usec)   : 10=54.09%, 20=5.32%
  lat (msec)   : 20=0.38%, 50=0.35%, 100=0.46%, 250=1.79%, 500=0.68%
  lat (msec)   : 750=1.35%, 1000=1.26%, 2000=12.12%, >=2000=22.71%
  cpu          : usr=0.00%, sys=0.12%, ctx=21767, majf=0, minf=934
  IO depths    : 1=0.0%, 2=0.5%, 4=6.5%, 8=19.5%, 16=36.3%, 32=35.8%, >=64=0.5%
     submit    : 0=0.0%, 4=2.9%, 8=6.0%, 16=10.7%, 32=30.1%, 64=50.2%, >=64=0.0%
     complete  : 0=0.0%, 4=1.2%, 8=0.8%, 16=0.4%, 32=0.4%, 64=97.2%, >=64=0.0%
     issued rwts: total=13767,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=239MiB/s (251MB/s), 239MiB/s-239MiB/s (251MB/s-251MB/s), io=14.4GiB (15.5GB), run=61806-61806msec

Disk stats (read/write):
  sdb: ios=23356/31, merge=104/9, ticks=3959183/6018, in_queue=3968175, util=99.59%

wwn-0x5000c500e8736faf

read_throughput: (groupid=0, jobs=16): err= 0: pid=110961: Tue Sep 10 00:45:03 2024
  read: IOPS=208, BW=225MiB/s (236MB/s)(13.6GiB/61959msec)
    slat (msec): min=15, max=5669, avg=3231.10, stdev=1235.85
    clat (usec): min=4, max=6153.8k, avg=960136.03, stdev=1497047.96
     lat (msec): min=546, max=10955, avg=4070.84, stdev=1335.92
    clat percentiles (usec):
     |  1.00th=[      6],  5.00th=[      7], 10.00th=[      7],
     | 20.00th=[      8], 30.00th=[      9], 40.00th=[      9],
     | 50.00th=[     10], 60.00th=[     11], 70.00th=[1400898],
     | 80.00th=[2332034], 90.00th=[3439330], 95.00th=[4177527],
     | 99.00th=[4999611], 99.50th=[5268046], 99.90th=[5670700],
     | 99.95th=[5670700], 99.99th=[6006244]
   bw (  MiB/s): min= 1659, max= 2015, per=100.00%, avg=1931.25, stdev= 6.68, samples=221
   iops        : min= 1654, max= 2014, avg=1930.08, stdev= 6.77, samples=221
  lat (usec)   : 10=56.28%, 20=7.91%
  lat (msec)   : 100=0.04%, 250=1.73%, 500=1.15%, 750=0.43%, 1000=1.51%
  lat (msec)   : 2000=5.95%, >=2000=25.43%
  cpu          : usr=0.00%, sys=0.11%, ctx=17710, majf=0, minf=933
  IO depths    : 1=0.0%, 2=2.5%, 4=7.4%, 8=22.8%, 16=39.6%, 32=25.2%, >=64=0.5%
     submit    : 0=0.0%, 4=5.2%, 8=5.5%, 16=11.8%, 32=27.0%, 64=50.5%, >=64=0.0%
     complete  : 0=0.0%, 4=1.7%, 8=2.1%, 16=0.4%, 32=0.0%, 64=95.9%, >=64=0.0%
     issued rwts: total=12940,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=225MiB/s (236MB/s), 225MiB/s-225MiB/s (236MB/s-236MB/s), io=13.6GiB (14.6GB), run=61959-61959msec

Disk stats (read/write):
  sda: ios=18930/187, merge=84/15, ticks=3978203/30482, in_queue=4013718, util=98.65%

Random reads

sudo fio --name=read_iops --directory=$TEST_DIR --size=10G \
--time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \
--verify=0 --bs=4K --iodepth=256 --rw=randread --group_reporting=1 \
--iodepth_batch_submit=256  --iodepth_batch_complete_max=256

wwn-0x5000c500e8737337

read_iops: (groupid=0, jobs=1): err= 0: pid=124983: Tue Sep 10 00:51:13 2024
  read: IOPS=742, BW=2984KiB/s (3056kB/s)(176MiB/60299msec)
    slat (usec): min=10, max=396967, avg=193764.08, stdev=67633.93
    clat (usec): min=3, max=840200, avg=116558.26, stdev=119169.41
     lat (msec): min=28, max=1136, avg=310.28, stdev=116.58
    clat percentiles (usec):
     |  1.00th=[     7],  5.00th=[     8], 10.00th=[    10], 20.00th=[    13],
     | 30.00th=[    16], 40.00th=[ 20579], 50.00th=[ 79168], 60.00th=[141558],
     | 70.00th=[242222], 80.00th=[252707], 90.00th=[263193], 95.00th=[287310],
     | 99.00th=[383779], 99.50th=[408945], 99.90th=[517997], 99.95th=[526386],
     | 99.99th=[583009]
   bw (  KiB/s): min= 1536, max= 3192, per=100.00%, avg=2986.29, stdev=380.93, samples=120
   iops        : min=  384, max=  798, avg=746.48, stdev=95.24, samples=120
  lat (usec)   : 4=0.02%, 10=14.03%, 20=20.25%, 50=0.86%, 100=0.29%
  lat (msec)   : 10=2.63%, 20=2.05%, 50=6.04%, 100=7.96%, 250=23.92%
  lat (msec)   : 500=22.19%, 750=0.19%, 1000=0.01%
  cpu          : usr=0.04%, sys=0.45%, ctx=5612, majf=0, minf=58
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.6%, >=64=99.4%
     submit    : 0=0.0%, 4=1.7%, 8=1.2%, 16=5.1%, 32=5.9%, 64=14.5%, >=64=71.6%
     complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
     issued rwts: total=44800,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
   READ: bw=2984KiB/s (3056kB/s), 2984KiB/s-2984KiB/s (3056kB/s-3056kB/s), io=176MiB (184MB), run=60299-60299msec

Disk stats (read/write):
  sdb: ios=46326/28, merge=0/7, ticks=3762968/2322, in_queue=3766570, util=99.32%

wwn-0x5000c500e8736faf

read_iops: (groupid=0, jobs=1): err= 0: pid=128131: Tue Sep 10 00:52:58 2024
  read: IOPS=729, BW=2933KiB/s (3003kB/s)(173MiB/60237msec)
    slat (usec): min=14, max=413100, avg=201557.15, stdev=73844.09
    clat (usec): min=2, max=718768, avg=115156.89, stdev=122624.89
     lat (msec): min=21, max=933, avg=316.77, stdev=121.32
    clat percentiles (usec):
     |  1.00th=[     7],  5.00th=[     9], 10.00th=[    10], 20.00th=[    12],
     | 30.00th=[    15], 40.00th=[ 21627], 50.00th=[ 66847], 60.00th=[129500],
     | 70.00th=[231736], 80.00th=[261096], 90.00th=[274727], 95.00th=[304088],
     | 99.00th=[404751], 99.50th=[417334], 99.90th=[513803], 99.95th=[541066],
     | 99.99th=[633340]
   bw (  KiB/s): min= 1568, max= 3198, per=99.83%, avg=2928.58, stdev=583.56, samples=120
   iops        : min=  392, max=  799, avg=732.06, stdev=145.87, samples=120
  lat (usec)   : 4=0.09%, 10=12.95%, 20=24.34%, 50=0.31%
  lat (msec)   : 10=0.32%, 20=0.65%, 50=7.64%, 100=10.24%, 250=16.18%
  lat (msec)   : 500=27.30%, 750=0.13%
  cpu          : usr=0.04%, sys=0.44%, ctx=5522, majf=0, minf=58
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=4.1%, >=64=95.6%
     submit    : 0=0.0%, 4=1.3%, 8=1.5%, 16=4.8%, 32=5.9%, 64=14.5%, >=64=72.0%
     complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.4%, >=64=99.6%
     issued rwts: total=43914,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
   READ: bw=2933KiB/s (3003kB/s), 2933KiB/s-2933KiB/s (3003kB/s-3003kB/s), io=173MiB (181MB), run=60237-60237msec

Disk stats (read/write):
  sda: ios=45430/224, merge=0/15, ticks=3740766/13102, in_queue=3756451, util=98.88%