[wplug] Hardware RAID tuning

Matthew Zwier mczwier at gmail.com
Thu Jun 16 12:32:20 EDT 2011


Ah, thanks.  So I'm not saturating my bus.

On Thu, Jun 16, 2011 at 11:31 AM, Martin James Gehrke
<martin at teamgehrke.com> wrote:
>
> From http://www.directron.com/expressguide.html: PCI-E 4x has 2000 MB/s
> in a single direction -- way more than you are using.
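>
> As a back-of-the-envelope check (assuming that figure refers to a PCIe
> 2.0 x4 link; a PCIe 1.x x4 link would be half of it):
>
>   5 GT/s per lane * 8/10 (8b/10b encoding) = 4 Gbit/s = 500 MB/s per lane, per direction
>   4 lanes * 500 MB/s = 2000 MB/s per direction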
>
> On Thu, Jun 16, 2011 at 11:02 AM, Matthew Zwier <mczwier at gmail.com> wrote:
>>
>> And by PCI I mean PCI Express.
>>
>> On Thu, Jun 16, 2011 at 11:00 AM, Matthew Zwier <mczwier at gmail.com> wrote:
>> > On Wed, Jun 15, 2011 at 10:17 PM, Drew from Zhrodague
>> > <drewzhrodague at zhrodague.net> wrote:
>> >>        Shouldn't you avoid sharing controllers with other arrays, and
>> >> actually use multiple controllers per array? This is how we did things
>> >> back in the day with old JBOD Suns -- stripe across disks and stripe
>> >> across controllers. Modern architectures turn everything on its ear,
>> >> so I could be missing something.
>> >
>> > I doubt you're missing something, but we're out of slots in the
>> > server, and out of space in the rack for a new one :)  Did I mention
>> > this is a scientific computing cluster for a small research group?
>> > Funds and space are limited.  Also, I can't correlate the performance
>> > drop to a specific load on the other array.
>> >
>> >>        If it helps, I've found that a straight scp in Amazon's cloud
>> >> is faster than an scp -C or a compressed rsync. The results may be
>> >> different in a hardware environment, but testing is the only way to
>> >> be sure.
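>> >>
>> >> A minimal way to check, assuming a representative sample file and a
>> >> target host (both hypothetical here):
>> >>
>> >>   time scp big-sample.tar user@backuphost:/backup/      # plain
>> >>   time scp -C big-sample.tar user@backuphost:/backup/   # compressed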
>> >
>> > Yeah, I tested this yesterday.  Compression slows things down
>> > considerably, and enumerating files (rsync) slows things down to
>> > kilobytes per second.  A typical user has a million or so tiny files
>> > and a few hundred very large files (10 GB - 200 GB).  xfsdump over an
>> > uncompressed ssh connection appears to be the way to go.
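>> >
>> > For reference, a minimal sketch of that pipeline (host and paths here
>> > are placeholders):
>> >
>> >   xfsdump -J - /data | ssh -o Compression=no backuphost 'xfsrestore -J - /backup/data'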
>> >
>> > Interestingly, piping things through bar to measure throughput seems
>> > to slow things down a *lot*, and in fact may be the bigger reason for
>> > the intermittent throughput.  Now that I don't have bar in the pipe,
>> > I'm sustaining 4 MB/s - 50 MB/s, seemingly correlated with whichever
>> > set of files xfsdump is working on.  I'm wondering if I need to do
>> > something *really* dumb, like xfsdump -J - | xfsrestore -J -
>> > /some/nfs/mount.  That would go through one fewer userland buffer.
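>> >
>> > Roughly, the pipe I have in mind, timed as a whole rather than with a
>> > meter in the middle (the /data source path is just a placeholder):
>> >
>> >   time sh -c 'xfsdump -J - /data | xfsrestore -J - /some/nfs/mount'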
>> >
>> > Hmm...what's the bandwidth of a PCI bus?
>> >
>> > MZ
>> >

