Load spikes when new email arrives

francis picabia fpicabia at gmail.com
Thu Jan 24 10:20:26 EST 2013


On Wed, Jan 23, 2013 at 5:25 PM, Andrew Morgan <morgan at orst.edu> wrote:

> On Wed, 23 Jan 2013, francis picabia wrote:
>
>> Thanks for the response.  I have been checking iostat whenever there are
>> a number of messages in the active queue.
>>
>> Here is a sample snapshot from a script I run (ignoring the first
>> iostat report, which shows averages since boot):
>>
>> Active in queue: 193
>> 12:47:01 up 5 days,  5:23,  6 users,  load average: 14.11, 9.22, 4.67
>>
>> Device:   rrqm/s   wrqm/s    r/s     w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz  await  svctm  %util
>> sda5        3.25   281.00  19.75  129.50   654.00  3384.00     27.06      5.53  36.24   6.69  99.80
>>
>> svctm is about the same as when not under load; it only went above 7 once.
>> Then there is this comment about the validity of tracking svctm:
>> http://www.xaprb.com/blog/2010/09/06/beware-of-svctm-in-linuxs-iostat/
>>
>> %util often reaches close to 100% when there is a queue to process.
>>
>> sda5 is where the Cyrus mail/IMAP spool lives.  Our account names all begin
>> with numbers, so almost all mail accounts are under the q folder.
>>
>
> Okay, I didn't realize svctm could be suspect, although I guess that makes
> sense in a RAID array.  What about your await times?  Does await increase
> during peak loads?
>
> It seems pretty clear from iostat that you are IO bound on writes during
> mail delivery.  As Vincent said in his reply, RAID5 performs poorly during
> writes.  Each write actually consumes 4 disk operations (read old data,
> read old parity, write new data, write new parity).  If you can live with
> the slight additional risk, turn on write caching on the Perc 5/i if you
> haven't already.  I think they call it "write-back" versus "write-through".
>
> If you can handle it, you would probably be a lot happier converting that
> RAID5 set to RAID10.  You'll lose a disk's worth of capacity, but get double
> the write performance.
>
> However, what is your real goal?  Do you want to deliver mail more
> quickly, or do you want to reduce your load average?  You can probably
> reduce your load average and perhaps gain a bit of speed by tweaking the
> lmtp maxchild limit.  If you really need to deliver mail more quickly, then
> you need to throw more IOPS at it.
>
> Let's keep this discussion going!  There are lots of ways to tune for
> performance.  I've probably missed some.  :)
>
>
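
To put rough numbers on the 4-operations-per-write penalty you describe, here
is a back-of-the-envelope sketch (the spindle count and per-disk IOPS figure
are assumptions, not measurements from our array):

    # Rough RAID5 vs RAID10 random write IOPS estimate -- assumed values only
    spindles = 4          # assumed number of disks in the set
    iops_per_disk = 150   # assumed random IOPS for a 7.2k SATA spindle

    raw_iops = spindles * iops_per_disk
    raid5_writes = raw_iops / 4    # read data, read parity, write data, write parity
    raid10_writes = raw_iops / 2   # each write lands on both halves of a mirror pair

    print("RAID5  ~%d write IOPS" % raid5_writes)
    print("RAID10 ~%d write IOPS" % raid10_writes)

With those assumed numbers, a RAID5 set tops out around 150 random write IOPS,
which is at least in the same ballpark as the 129.5 w/s in the iostat sample
above.
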
In another discussion on the Red Hat mailing list, I've confirmed we have an
issue with partition alignment.  This is getting to be quite a mess out there:
I saw one posting speculating that there are thousands of disk partitions
poorly aligned to their RAID stripe size.  fdisk and OS installers were slow
to be updated for the newer multi-TB disks, and for SSDs as well.
Misalignment might account for a 5 to 30% performance hit.
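
For anyone else who wants to check, something along these lines (a sketch; the
device name and the 1 MiB boundary are assumptions for a typical setup, not
our exact stripe size) reads the partition start sectors out of sysfs and
flags anything that is off the boundary:

    # Flag partitions whose start sector is not on a 1 MiB (2048-sector) boundary
    import glob, os

    ALIGN_SECTORS = 2048  # 1 MiB in 512-byte sectors; use your stripe size instead

    for start_file in sorted(glob.glob("/sys/block/sda/sda*/start")):
        part = os.path.basename(os.path.dirname(start_file))
        start = int(open(start_file).read().strip())
        state = "aligned" if start % ALIGN_SECTORS == 0 else "MISALIGNED"
        print("%s: start sector %d -> %s" % (part, start, state))

The classic symptom on older installs is a first partition starting at sector
63, which can never line up with a RAID stripe.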

I've checked, and my Cyrus lmtpd process count never exceeds 11 under load.
await jumps to 150-195 ms at worst.

If I'm already at IO saturation, I can't see how a higher lmtpd limit
would help.
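
For reference, the knob Andrew mentioned lives in the SERVICES section of
cyrus.conf; a line like the one below is where the lmtpd count gets capped
(the socket path and the numbers are illustrative placeholders, not our
actual config):

    # cyrus.conf, SERVICES section -- illustrative values only
    lmtpunix  cmd="lmtpd" listen="/var/lib/imap/socket/lmtp" prefork=0 maxchild=10

If I understand the suggestion, lowering maxchild there spreads the queue out
over time rather than clearing it faster, which may be closer to what I'm
actually after.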

My goal is to keep the system load reasonable so the server stays responsive
for end-user mailbox access.  Right now we get Nagios alerts about six times
a day for excessive load.  If I can turn the mail queue workload into a hill
instead of a sharp peak on the Cacti load graph, that would be good.  There
are minutes around the peaks where the queue is empty and we see only about
5 inbound messages per minute.

In hindsight, I agree RAID 10 should have been implemented.  At the time,
four years ago, getting lots of space was the priority, since space needs
always grow.  We never saw load issues until this month, and it seems to
coincide with a general increase in email volume and traffic.  Our primary
MX is also getting hit more than normal.