[wplug] Hardware RAID tuning

Matthew Zwier mczwier at gmail.com
Tue Jun 14 14:00:42 EDT 2011


Hi all,

I'm the systems administrator for a relatively small (~20 nodes, ~320
cores) scientific computing cluster with relatively large (20 TB)
storage needs.  We have a couple of RAID5 arrays on a Dell PERC/5E
(aka LSI MegaRAID) controller, both running XFS filesystems, and
while performance is generally acceptable, it appears I can't get a
backup in under five days for our 11 TB array.  That leads me to a
couple of questions:

1)  That translates to about a 40 MB/s sustained read, with frequent
lengthy drops to around 2 MB/s (measured, for the moment, by running
xfsdump into /dev/null).  For those of you with more experience than
I have: is that typical performance for a filesystem dump?
Pleasantly fast?  Unacceptably slow?
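
For reference, the test is essentially a level-0 dump sent straight
to /dev/null, along these lines (the mount point and labels here are
illustrative):

    # level-0 dump of the array's filesystem, discarded, purely to
    # time the read path
    xfsdump -l 0 -L bench -M bench -f /dev/null /mnt/array

    # per-device throughput watched from another terminal
    iostat -mx 5 sdb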

2)  Does anyone know of documentation on tuning an on-line hardware
RAID array under Linux, specifically for file service?  About all I
can find are discussions of how to optimize MySQL performance, or
tips on which parameters in /sys to tweak while piping zeros directly
to /dev/sdb with dd, and the like.  I can't find any documentation on
how the various hardware, kernel, and filesystem parameters interact.
The three-way optimization problem among RAID controller settings
(e.g. read-ahead), disk- and controller-specific kernel settings (TCQ
depth, read-ahead), and I/O-scheduler-specific settings (noop vs.
deadline vs. cfq, queue size, etc.) is just killing me.
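
To make the question concrete, these are the sorts of knobs I mean
(device name is just an example; the MegaCli syntax is from memory,
so treat it as approximate):

    # I/O scheduler and request queue for the array's block device
    cat /sys/block/sdb/queue/scheduler            # e.g. noop [deadline] cfq
    echo deadline > /sys/block/sdb/queue/scheduler
    echo 512 > /sys/block/sdb/queue/nr_requests

    # kernel read-ahead, in 512-byte sectors
    blockdev --setra 4096 /dev/sdb

    # TCQ depth as seen by the SCSI layer
    echo 128 > /sys/block/sdb/device/queue_depth

    # controller-level (adaptive) read-ahead on the PERC, via MegaCli
    MegaCli -LDSetProp ADRA -LAll -aAll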

Matt Z.

