Discussion:
[rsnapshot-discuss] HIGH iowait on our SOURCE
Thierry Lavallee via rsnapshot-discuss
2017-06-21 15:43:31 UTC
Permalink
Hi,
It seems our rsnapshot run is creating HIGH iowait on our SOURCE server. Is
there any way to alleviate this a bit?
Thanks

sar -e 05:00:00
12:00:01 AM CPU %user %nice %system %iowait %steal %idle
12:10:01 AM all 3.64 0.15 0.59 1.57 0.00 94.05
12:20:01 AM all 3.93 0.11 0.65 1.52 0.00 93.80
12:30:02 AM all 4.26 0.11 0.58 1.13 0.00 93.92
12:40:01 AM all 3.86 0.13 0.59 1.00 0.00 94.42
12:50:01 AM all 4.12 0.11 0.62 1.19 0.00 93.96
01:00:01 AM all 3.61 0.11 0.49 0.74 0.00 95.05
01:10:01 AM all 4.35 0.13 0.64 0.80 0.00 94.09
01:20:01 AM all 4.72 0.12 0.71 1.56 0.00 92.89
01:30:01 AM all 4.93 0.11 0.66 1.00 0.00 93.29
01:40:01 AM all 4.31 0.13 0.70 0.97 0.00 93.90
01:50:02 AM all 4.11 0.11 0.67 0.92 0.00 94.18
02:00:01 AM all 3.89 0.11 0.56 0.73 0.00 94.72
02:10:01 AM all 4.49 0.14 0.71 0.94 0.00 93.71
02:20:01 AM all 4.29 0.11 0.68 0.88 0.00 94.03
02:30:01 AM all 3.31 0.11 0.47 0.74 0.00 95.37
02:40:01 AM all 3.76 0.13 0.61 0.87 0.00 94.63
02:50:01 AM all 3.51 0.11 0.57 0.84 0.00 94.98
03:00:01 AM all 3.84 0.11 0.51   0.75  0.00 94.80 - RSNAPSHOT START
03:10:02 AM all 6.60 0.14 1.03  *4.18* 0.00 88.05 - RSNAPSHOT working
03:20:01 AM all 3.68 0.11 0.83 *16.19* 0.00 79.18 - RSNAPSHOT working
03:30:02 AM all 3.93 0.11 0.81 *12.27* 0.00 82.87 - RSNAPSHOT working
03:40:02 AM all 4.20 0.12 0.73 *15.88* 0.00 79.07 - RSNAPSHOT working
03:50:01 AM all 3.70 0.11 0.91 *13.84* 0.00 81.44 - RSNAPSHOT working
04:00:02 AM all 3.87 0.11 0.97 *22.54* 0.00 72.51 - RSNAPSHOT working
04:10:01 AM all 4.15 0.15 0.85 *15.54* 0.00 79.30 - RSNAPSHOT working
04:20:01 AM all 3.25 0.11 0.62  *5.44* 0.00 90.58 - RSNAPSHOT END
04:30:01 AM all 3.38 0.11 0.52 2.47 0.00 93.53
Andy Smith
2017-06-21 21:43:41 UTC
Permalink
Hello,
Post by Thierry Lavallee via rsnapshot-discuss
It seems our rsnapshot run is creating HIGH iowait on our SOURCE server. Is
there any way to alleviate this a bit?
I don't think you have a lot of options besides beefing up the
hardware.

The problem most likely is that the system is having to do
a lot of seeks because the files are scattered all over the disks.
In particular, if the filesystem was ever nearly full and
fragmentation has occurred, you don't get to do sequential reads
when walking a whole directory tree.

You could verify that is the case by looking at "iostat -x 1" and
seeing what the rkB/s figure is. If you're doing a lot of IO but
seeing low rkB/s then likely a lot of seeking is happening.
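A quick sanity check on those numbers: dividing rkB/s by r/s gives the average size of each read request. A sketch, using illustrative values (180 reads/s at 900 kB/s, roughly what a seek-bound 7200 rpm disk might show; real numbers come from your own `iostat -x 1` output):

```shell
# Assumed sample values taken from an iostat -x line (not real data):
sample_rps=180     # r/s   - read requests per second
sample_rkbs=900    # rkB/s - kilobytes read per second

# Average kilobytes transferred per read request. A few KB per request
# means the disk spends most of its time seeking, not streaming.
avg_read_kb=$(awk -v r="$sample_rps" -v kb="$sample_rkbs" \
    'BEGIN { printf "%.1f", kb / r }')
echo "$avg_read_kb KB per read"
```

With these sample numbers each read moves only 5 KB, which would point at seek-bound I/O rather than bandwidth-bound I/O.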

If you can prove significant fragmentation (e.g. by running filefrag
on a bunch of the files and seeing how many extents they are
composed of), then defragmenting your filesystem could possibly help
you.
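One way to summarize that across many files: filefrag prints lines like "/path/file: 12 extents found", so a small awk pass can report the average and worst extent counts. A sketch; the sample lines below are illustrative, and in practice you would feed it real output (e.g. `find /srv/data -type f -print0 | xargs -0 filefrag`, where /srv/data is whatever tree rsnapshot reads):

```shell
# Illustrative filefrag output; replace with real output from your tree.
filefrag_output='/srv/a.log: 1 extent found
/srv/b.db: 12 extents found
/srv/c.bin: 47 extents found'

# The extent count is the third-from-last field on each line.
echo "$filefrag_output" \
  | awk '{ n = $(NF - 2); total += n; if (n > max) max = n; files++ }
         END { printf "files=%d avg_extents=%.1f max_extents=%d\n",
               files, total / files, max }'
```

Files that are largely one or two extents are fine; averages in the dozens or hundreds suggest defragmentation could be worthwhile.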

If you have lots of spare RAM, then prefetching the whole tree into
the buffer cache before the backup runs could help you.
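A minimal sketch of that prefetch, assuming /srv/data is the tree rsnapshot backs up: read everything once and throw the bytes away, so the kernel's page cache is warm when rsync walks the same files.

```shell
# Example path; point SRC at the directory rsnapshot actually reads.
SRC="${SRC:-/srv/data}"

# Reading the tree through tar and discarding the archive warms the
# page cache; rsnapshot's rsync pass can then read mostly from RAM.
tar -cf /dev/null "$SRC" 2>/dev/null || true

# Alternatively, if the vmtouch tool is installed, it can load (and
# optionally pin) the tree explicitly:
#   vmtouch -t "$SRC"
```

Scheduling this shortly before the rsnapshot run (and only when RAM comfortably exceeds the data set) is what makes it pay off; otherwise the prefetched pages are evicted again before rsync gets to them.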

Other than that I think you need to add more hardware to decrease
your seek latency.

Cheers,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting