[wplug] Hardware RAID tuning

Martin James Gehrke martin at teamgehrke.com
Tue Jun 14 14:47:37 EDT 2011


MZ,

Hard disks, Linux, RAID, server performance tuning:
http://www.fccps.cz/download/adv/frr/hdd/hdd.html
You can also email the question to lopsa-discuss or jump onto IRC.

Are you doing incremental backups using xfsdump?

What are you dumping to?
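If you aren't doing incrementals already, a rough sketch of level-based dumps
(the mount point, dump file names, and labels below are just placeholders):

    # level 0 (full) dump once, then smaller level 1 dumps of changes since level 0
    xfsdump -l 0 -L full-week24 -M disk01 -f /backup/array.l0 /data
    xfsdump -l 1 -L incr-tue    -M disk01 -f /backup/array.l1 /data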

Martin

On Tue, Jun 14, 2011 at 2:34 PM, Matthew Zwier <mczwier at gmail.com> wrote:

> Hi Martin,
>
> Full backup using xfsdump.  I've tried it over the (gigabit) network
> (via SSH, into a file on a remote server) and also straight into
> /dev/null; numbers are identical.  The drop in throughput appears to
> be due to writes to another array on the same HBA channel, but that's
> been hard to isolate.
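> For reference, the dump is roughly the following (the host, path, and device
> names are placeholders), and watching per-device numbers with iostat is one
> way to try to pin down the contention:
>
>     # full dump piped over SSH to a file on the backup server
>     xfsdump -l 0 -L full -M none - /data | ssh backuphost 'cat > /backup/data.xfsdump'
>     # extended per-device stats in MB/s, 5-second intervals, while the dump runs
>     iostat -xm sda sdb 5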
>
> Our controllers don't support RAID6, so RAID5 is the best I can do
> with what I have.  No IOZone benchmarks, since I haven't had any
> trouble with performance until now.  Controller cache size is 256 MB.
> Connection is 3Gbps over a 4X cable.
>
> MZ
>
> On Tue, Jun 14, 2011 at 2:22 PM, Martin James Gehrke
> <martin at teamgehrke.com> wrote:
> > Matt Z.,
> > Could you be more specific about your backup mechanism? Are you doing a
> > full backup? Over the network? Using NFS/SCP/RSYNC/RDIFF?
> > 1. Any indication of why the throughput drops from 40 MB/s to 2 MB/s?
> > 2. Is the backup online? Are jobs writing while you are reading?
> > Sequential reads should get good performance on a RAID5; if you are
> > writing and reading at the same time, one will suffer (usually writing).
> > Any benchmarking with IOZone? (A sample run is sketched below.) What is
> > the controller cache size?
> > Side notes: why RAID5? We've completely phased out RAID5 in favor of RAID6.
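> > For the IOZone run, something along these lines would do (the test file
> > path and sizes are just examples):
> >
> >     # sequential write then sequential read, 4 GB file, 1 MB records
> >     iozone -i 0 -i 1 -s 4g -r 1m -f /srv/array/iozone.tmp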
> > Martin Gehrke
> > --------------------------
> > Calling all Sysadmins. Be one of the first members of a new Sysadmin
> > group in Pittsburgh.
> > SNAPGH System and Network Administrators of Pittsburgh
> >
> > On Tue, Jun 14, 2011 at 2:00 PM, Matthew Zwier <mczwier at gmail.com> wrote:
> >>
> >> Hi all,
> >>
> >> I'm the systems administrator for a relatively small (~20 nodes, ~320
> >> cores) scientific computing cluster with relatively large (20 TB)
> >> storage needs.  We have a couple of RAID5 arrays on a Dell PERC/5E
> >> (aka LSI MegaRAID) controller, both running XFS filesystems, and
> >> while performance is generally acceptable, it appears I can't get a
> >> backup in under five days for our 11 TB array.  That leads me to a
> >> couple of questions:
> >>
> >> 1)  That translates to about a 40 MB/s sustained read, with frequent
> >> lengthy drops to around 2 MB/s (this using xfsdump into /dev/null, for
> >> the moment).  For those of you with more experience than I...is that
> >> typical performance for a filesystem dump?  Pleasantly fast?
> >> Unacceptably slow?
> >>
> >> 2)  Does anyone know of documentation about how to go about tuning an
> >> on-line hardware RAID array in Linux, specifically for file service?
> >> About all I can find are discussions about how to optimize MySQL
> >> performance, or tips on what parameters in /sys to tweak while piping
> >> zeros directly to /dev/sdb using dd, and the like.  I can't find any
> >> documentation on how various hardware/kernel/filesystem parameters
> >> interact.  The three-way optimization problem among RAID controller
> >> settings (e.g. read-ahead), disk- and controller-specific kernel
> >> settings (TCQ depth, read-ahead), and I/O-scheduler-specific settings
> >> (noop vs. deadline vs. cfq, queue size, etc.) is just killing me.
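> >> (The knobs I mean are along these lines; /dev/sdb and the values are
> >> just examples:)
> >>
> >>     # block-layer read-ahead for the device, in KB
> >>     echo 4096 > /sys/block/sdb/queue/read_ahead_kb
> >>     # I/O scheduler for the device
> >>     echo deadline > /sys/block/sdb/queue/scheduler
> >>     # block-layer request queue size
> >>     echo 512 > /sys/block/sdb/queue/nr_requests
> >>     # per-device (TCQ) queue depth
> >>     echo 64 > /sys/block/sdb/device/queue_depth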
> >>
> >> Matt Z.