Miserable performance of cyrus-imapd 2.3.9 -- seems to be locking issues

David Lang david.lang at digitalinsight.com
Tue Mar 4 11:53:14 EST 2008


On Tue, 4 Mar 2008, Ian G Batten wrote:

>  software RAID5 is a performance
> disaster area at the best of times unless it can take advantage of
> intimate knowledge of the intent log in the filesystem (RAID-Z does
> this),

Actually, unless you have a top-notch hardware RAID controller, software RAID 5 
may be better than hardware RAID 5. Many controllers only do a decent job with 
RAID 0 or RAID 1, so this is something to measure on your particular hardware. 
I've seen many cases where the cards do a horrible job with RAID 5 compared to 
software.
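
If you do want to put numbers on it, the workload that matters for a mail spool 
is lots of small writes each followed by an fsync, so a rough sanity check could 
look something like the sketch below (the directory, file count and message size 
are made-up values for illustration; run it against each array and compare):

import os, time

TEST_DIR = "/var/spool/raidtest"   # hypothetical directory on the array under test
NUM_FILES = 2000                   # illustrative only
MSG_SIZE = 8 * 1024                # roughly one small message

os.makedirs(TEST_DIR, exist_ok=True)
payload = b"x" * MSG_SIZE

start = time.time()
for i in range(NUM_FILES):
    fd = os.open(os.path.join(TEST_DIR, "msg.%d" % i),
                 os.O_WRONLY | os.O_CREAT, 0o600)
    os.write(fd, payload)
    os.fsync(fd)                   # force it to stable storage, like a delivery would
    os.close(fd)
elapsed = time.time() - start

print("%d fsync'd writes in %.1f sec -> %.0f msgs/sec"
      % (NUM_FILES, elapsed, NUM_FILES / elapsed))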

> and three-disk RAID5 assemblages are a performance disaster
> area irrespective of hardware in a failure scenario.  The rebuild
> will involve taking 50% of the IO bandwidth of the two remaining
> disks in order to saturate the new target; rebuild performance ---
> contrary to intuition --- improves with larger assemblages as you can
> saturate the replacement disk with less and less of the bandwidth of
> the surviving spindles.

This is true up to the point where the bus gets saturated with the resync 
traffic; after that, more disks will not improve the rebuild time.
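
To put rough numbers on where that crossover sits: every surviving member gets 
read in full and the replacement gets written in full, so the bus carries 
roughly N times the rebuild write rate. The drive speed, bus bandwidth and 
capacity below are assumed round figures, not measurements:

DISK_MB_S = 60.0      # assumed streaming speed of one drive
BUS_MB_S = 300.0      # assumed shared controller/bus bandwidth
DRIVE_GB = 500.0      # per-drive capacity

for n in (3, 4, 6, 8, 12):
    # the rebuild write rate is limited by the drive itself or by its
    # 1/N share of the shared bus, whichever is smaller
    rate = min(DISK_MB_S, BUS_MB_S / n)
    hours = DRIVE_GB * 1024 / rate / 3600
    print("%2d drives: rebuild at %5.1f MB/s, about %4.1f hours" % (n, rate, hours))

With those assumed numbers the bus becomes the limit somewhere around five or 
six drives, and past that point adding spindles makes the rebuild slower rather 
than faster.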

> For a terabyte, 3x500GB SATA drives in a RAID5 group will be blown
> out of the water by 4x500GB SATA drives in a RAID 0+1 configuration
> in terms of performance and (especially) latency, especially if it
> can do the Solaris trick of not faulting an entire RAID 0 sub-group
> if one spindle fails.  Rebuild still isn't pretty, mind you.

Either of these cases will survive a single drive failure. What I would look at 
is either 3x1TB drives in RAID 1 or 4x500GB drives in RAID 6, to get the ability 
to survive two drives failing.

It takes long enough to rebuild an array with large drives that the chance of a 
second drive failing during the rebuild becomes noticeable.
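
To put a very rough number on that risk (and these are loudly assumed figures, 
not vendor numbers): with a 5% annualized failure rate per drive and a 24-hour 
rebuild window, something like this gives the flavour. Treating failures as 
independent understates the real risk, since drives bought and worn together 
tend to fail together:

import math

AFR = 0.05              # assumed 5% annualized failure rate per drive
REBUILD_HOURS = 24.0    # assumed rebuild window for a large SATA drive
HOURS_PER_YEAR = 365.0 * 24.0

def p_second_failure(surviving_drives):
    # chance that at least one of the remaining drives also fails
    # during the rebuild window
    per_drive = 1.0 - math.exp(-AFR * REBUILD_HOURS / HOURS_PER_YEAR)
    return 1.0 - (1.0 - per_drive) ** surviving_drives

for n in (2, 3, 5, 7):
    print("%d surviving drives: ~%.3f%% chance of a second failure mid-rebuild"
          % (n, 100.0 * p_second_failure(n)))

Either the three-way RAID 1 or the RAID 6 setup turns that second failure into 
another rebuild instead of a dead array.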

David Lang

