[wplug] Hardware RAID tuning

Matthew Zwier mczwier at gmail.com
Fri Jun 17 12:03:43 EDT 2011


Thanks all for your help on this.  I think I've finally run this to
ground.  For the archives, I'll summarize what I've learned.

The culprit appears to be entirely the contents of the filesystem I'm
trying to back up.  When I ran appropriate tests against the arrays
themselves, the sequential read rate was very good (250 MB/s), and the
random read rate was acceptable (40 MB/s - 80 MB/s depending on file
size) and well correlated with the number of spindles in each array.
The controller and I/O scheduler were not at fault; in fact, I got
identical behavior with the cfq and deadline I/O schedulers.
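
For anyone who wants to repeat the tests, here's a minimal sketch of
what I mean by sequential and random reads.  The device name and path
are placeholders, not my exact invocations:

    # sequential read from the raw device, bypassing the page cache
    dd if=/dev/sdb of=/dev/null bs=1M count=8192 iflag=direct

    # crude random-read test: drop caches (as root), then read a tree
    # of files so the heads have to seek
    echo 3 > /proc/sys/vm/drop_caches
    tar cf /dev/null /raid/some/dir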

Upon closer examination of the filesystem I was attempting to dump,
two users turned out to be responsible for 24 million files under 4k
in size.  Since xfsdump dumps files (and not some lower-level entity
like blocks or extents), lots of tiny files slow the dump process to a
crawl.  Hitting a large group of such files, I'd see dump rates of
about 800 KB/s for the tiniest files, 4 MB/s for small ones, and 40
MB/s for average-sized files.
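
If you want to check whether your own filesystem has this problem,
something along these lines works (GNU find assumed; the path is a
placeholder):

    # count files smaller than 4k, broken down by owner
    find /raid -type f -size -4k -printf '%u\n' | sort | uniq -c | sort -rn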

I was piping the dump through SSH to get it to the backup server.
Though ssh wasn't maxing out a CPU, something about how SSH was
buffering (or perhaps suboptimal socket options) appeared to be
limiting my throughput to about 80 MB/s.  Switching to NFS got me to a
peak throughput of 128 MB/s, which is about as fast as my gigabit
network with 4000-byte frames can push data.  Using "bar" to measure
throughput (either through SSH or straight to /dev/null) was even
worse, tending to limit throughput to about 40 MB/s.
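
Concretely, the two setups looked roughly like this (hostnames and
paths are placeholders):

    # old: level-0 dump piped over SSH
    xfsdump -l 0 - /raid | ssh backuphost 'cat > /backup/raid.dump'

    # new: dump straight onto an NFS mount from the backup server
    mount -t nfs backuphost:/backup /mnt/backup
    xfsdump -l 0 -f /mnt/backup/raid.dump /raid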

So, in summary -- I jumped to the conclusion that I had a
mis-configured storage system, when in fact I had a pathologically
difficult-to-back-up filesystem.  My tendency to use metrics like
network throughput over SSH -- or bandwidth through "bar" -- to
evaluate my disks led me to false conclusions about their performance
in general.  I'll be using "dd" from now on, and not trying to tweak
an array that ain't broken.
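
By "using dd" I just mean putting it at the end of the pipe as the
byte counter and reading the rate it prints when the pipe closes.  A
sketch, with the path again a placeholder:

    # dd reports bytes/sec on stderr at exit, without throttling the pipe
    xfsdump -l 0 - /raid | dd of=/dev/null bs=1M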

Again...thanks for all your help!
Matt Z.

