[wplug] RAID performance

Bill Moran wmoran at potentialtech.com
Fri Mar 2 11:09:06 EST 2007


In response to Patrick Wagstrom <pwagstro at andrew.cmu.edu>:
> I'm in the process of speccing out a new machine for a research group 
> here at CMU.  It needs to be fairly beefy: handle about 1TB of MySQL 
> databases, plus leave room for other computations to take place.  Anyway, 
> we'll put aside all the issues of MySQL and their choice of software, 
> and instead focus on an interesting issue I noticed last night.  I'm 
> looking for some help with it, or someone to double-check it.
> 
> We've got a machine right now with 4x400GB Hitachi SATA I drives (model 
> HDS724040KLSA80S) connected to a 3ware Escalade 9500S 4-port SATA RAID 
> controller (128MB of RAM on the controller).  These drives are 
> configured as RAID 5.  On the other hand, at home I've got 4x320GB 
> Seagate SATA II drives (model ST3320620AS) running software RAID 5 on 
> an MSI K8N-Neo4 Platinum (8x SATA ports).  I went with software RAID at 
> home because of cost, and because performance isn't overly critical 
> there; the machine just handles HDTV, which only needs about 2MB/s 
> write speed max.
> 
> Anyway, I did some admittedly synthetic benchmarks to compare 
> performance on the system because I wanted to get an idea how much the 
> hardware RAID made a difference, or if we'd be better off getting an 
> additional 2 or 3 500GB drives for the cost of the hardware RAID.  As a 
> final point of comparison, I included my IBM T43p laptop, which has a 
> SATA hard drive inside and no RAID.  For reads, the speeds are averaged 
> over 10 runs of hdparm.  For writes, the speed is averaged over 3 
> consecutive runs.  Here's what I found:
> 
> using hdparm -tT to get an idea of read speed:
> 
> 9500S Hardware: 52MB/s
> K8N Software: 178MB/s
> T43p No Raid: 40MB/s
> 
> Then I decided to find some large files and copy them from one location 
> on the drive to another.  This was the best I could do, because I didn't 
> want external drives to be the bottleneck and the hardware RAID machine 
> has all of its disks as part of the array:
> 
> 9500S Hardware (1.5G file): 24MB/s
> K8N Software (1.1G file): 42MB/s
> T43p No Raid (2.3G file): 11.5MB/s
> 
> So, as should be expected, the laptop lags behind on just about 
> everything.  However, what I was surprised to find was that software 
> RAID 5 destroyed the hardware RAID 5 in terms of speed.  This leads me 
> to wonder about a few things, and I'd like to get other people's 
> feedback.
> 
> Does anyone else have a 3ware 9500S running RAID 5 who could share some 
> useful benchmarks?  Are there tuning parameters I should apply to the 
> 9500S to increase performance?  Should the switch between SATA I and 
> SATA II drives really make that much of a difference?  My impression is 
> that while SATA II drives theoretically support 3Gbps, they come 
> nowhere close in practice -- which is what my results show.  However, 
> going 3x as fast for reads and 2x as fast for writes on software RAID 
> was quite surprising.
> 
> Any comments?

In my experience, there is no apples-to-apples comparison possible.

The speed of components varies widely from mfr to mfr and from product
line to product line.  Quite honestly, manufacturers find the best
benchmark for each product they sell and publish only that, and they
outright lie more often than one might believe.

I would suspect that the problem is with the Hitachi drives.  It's quite
possible that some oddball aspect of the drive is pulling performance
down -- maybe the seek time is causing a delay every time the heads
have to relocate.  I've no personal experience with the Escalades, but
I seem to remember hearing good things about them, and a quick search
of the PostgreSQL lists seems to find them in favor.
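
Before blaming the drives outright, a couple of quick checks on the
controller itself are cheap.  A sketch, assuming the array shows up as
/dev/sda and using 3ware's tw_cli tool -- the device name and the
controller/unit numbers here are placeholders, not anything from
Patrick's post:

    # Raw sequential read off the whole unit, same as the hdparm tests:
    hdparm -tT /dev/sda

    # Controller and unit status: firmware, stripe size, rebuild state:
    tw_cli /c0 show
    tw_cli /c0/u0 show

    # A disabled write cache is a classic cause of a slow 9500S; only
    # enable it if the controller has a battery or the box is on a UPS:
    tw_cli /c0/u0 set cache=on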

The argument that "hardware RAID is faster than software RAID" has been
debunked enough times that I'm surprised it's coming up again.  I don't
have time to look for it now, but I remember Greg Lehey demonstrating
that Vinum for FreeBSD was as fast as, or faster than, every hardware
RAID card he could find to test it against.  I doubt that Linux's SW
RAID is very far behind that.
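
If anyone wants to redo the comparison on their own arrays, the same
loop works on both software and hardware boxes.  A sketch, assuming the
array is mounted at /array and exposed as /dev/sda (both placeholders),
with the test file sized well past RAM and the controller's 128MB cache:

    # Average 10 hdparm passes, as Patrick did for the read numbers:
    for i in $(seq 1 10); do hdparm -t /dev/sda; done \
      | awk '/MB\/sec/ { sum += $(NF-1); n++ } END { print sum/n, "MB/sec" }'

    # Sequential write, flushed to disk so caches can't flatter the number:
    dd if=/dev/zero of=/array/testfile bs=1M count=4096 conv=fdatasync

    # Read it back -- remount the filesystem first, or the page cache
    # will serve it straight from RAM:
    dd if=/array/testfile of=/dev/null bs=1M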

If you think about it, how much can you _really_ improve RAID performance
by moving it into hardware?  The parity calculations will be faster, but
that's very simple math -- how much time is the CPU really spending on it?
You can keep the data required for the parity calcs off the bus, but how
much does that really improve the speed?  Is the bus saturated?  And, of
course, none of that makes any difference with reads anyway.
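
To put a picture on "very simple math": RAID 5 parity is just a bytewise
XOR across the data blocks, and rebuilding a lost block is the same XOR
with the parity swapped in.  At toy scale, one byte per "disk" and the
values made up:

    d0=0xA5; d1=0x3C; d2=0x77                # three data blocks
    p=$(( d0 ^ d1 ^ d2 ))                    # parity block
    printf 'parity     = 0x%02X\n' "$p"
    printf 'rebuilt d1 = 0x%02X\n' $(( d0 ^ p ^ d2 ))  # lose d1, rebuild it

Any modern CPU does that over a full stripe in next to no time, which is
exactly the point.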

-- 
Bill Moran
http://www.potentialtech.com

