Slow lmtpd
Simon Matter
simon.matter at invoca.ch
Sat Mar 3 07:14:59 EST 2007
>> From the earlier discussion on this topic, it sounds to me like you are
>> simply pushing your available hardware too hard without further tuning.
>> You mentioned using ATA-over-Ethernet storage for your mail spool. Have
>> you considered putting your configdirectory files on a local hard drive
>> instead of on the ATA-over-Ethernet storage? There is a *lot* of
>> contention for the files in the config directory, so maybe it would be
>> better to move them onto a drive separate from the mail spool.
>
> The machine actually doesn't have any local disks (it's booted via pxe
> and the root partition is also on AoE). The directories /var/spool/imap
> and /var/lib/imap are each on its own LVM logical volume.
As long as both logical volumes reside on the same storage, I don't think
it makes much difference how many LVs and filesystems you spread those
directories across. Whenever I hear about such issues, I'm reminded of those
old database gurus calling for spindles, spindles and spindles (which also
meant paths in the older days, when shared, intelligent storage with large
caches and all that nice virtual disk stuff was rare). My still limited
experience with shared storage (in my case an HP EVA3000) has given me the
following rules for running cyrus-imapd:
1) Try to put different kinds of data (spool, meta databases) on
independent storage (which means independent paths, disks, SAN
controllers). For the small things like the cyrus databases, putting them on
separate, locally attached SCSI/SATA disks seems like a very good idea. From
what I know about AoE, I think it will always suffer latency problems compared
to FC or SCSI, simply because Ethernet cards are not exactly designed for
that kind of task.
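To illustrate what I mean, a rough sketch of such a split in imapd.conf on
cyrus-imapd 2.3.x (the metapartition paths here are made up for the example,
and you should check imapd.conf(5) for the exact list of files that
metapartition_files accepts) could look something like:

    configdirectory: /var/lib/imap              # databases, ideally on local/low-latency disk
    partition-default: /var/spool/imap          # message files on the big shared storage
    metapartition-default: /var/spool/imapmeta  # cyrus.* metadata on separate spindles
    metapartition_files: header index cache expunge squat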
2) If you have any kind of shared storage, make sure that a single system
cannot bring performance down, because it will slow down other machines
and also block threads/processes on that single machine. In our case, with
the HP EVA3000, it's very important to limit the FC/SCSI queue depth, because
one Linux server can otherwise saturate the EVA controller's I/O, and other
systems become extremely slow accessing their disks. We do that with the
option "ql2xmaxqdepth=16" on the QLogic FC adapters. It simply sets the
value which can be found in /sys/block/sd?/device/queue_depth.
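For what it's worth, on our setup that boils down to roughly the following
(qla2xxx is the module name for our QLogic HBAs, and the modprobe.conf
location and the ideal depth will differ with other distributions and
storage):

    # /etc/modprobe.conf
    options qla2xxx ql2xmaxqdepth=16

    # after reloading the driver (or rebooting), verify the effective value
    cat /sys/block/sd*/device/queue_depth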
3) Always test performance. Never trust simple raw speed tests, and never
assume that hardware RAID, vdisk controllers or whatever always perform
better than doing it in software.
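As a minimal example of what I mean: a streaming dd will happily tell you
the storage is fast, while extended iostat output under real delivery load
shows the latency and queueing that lmtpd actually sees (the test file name
is just an example, remove it afterwards):

    # naive raw speed test - usually looks great and proves very little
    dd if=/dev/zero of=/var/spool/imap/ddtest bs=1M count=1024

    # watch await/avgqu-sz/%util per device while real deliveries are running
    iostat -x 5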
Sorry this doesn't help you much; I just wanted to share some of my experiences.
Simon
>
> I mounted /var/lib/imap/proc as a memory-based filesystem (using tmpfs),
> because of the constant writes to this directory, and yesterday I tried
> moving deliver.db to that directory, and creating a symlink, but it
> didn't improve the situation a lot.
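Side note for the archives: such a tmpfs mount is typically just an
/etc/fstab entry along these lines (the size is only an example, and cyrus
recreates the contents of the proc directory at runtime anyway):

    tmpfs  /var/lib/imap/proc  tmpfs  defaults,size=64m  0 0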
>
>> After running iostat on my cyrus partition (both config and mail spool
>> are
>> kept on a SAN), I'm wondering if I should separate them out as well.
>> This
>> sounds related to the new metapartition and metapartition_files options
>> that were added in v2.3.x.
>>
>> Does anyone have any recommendations or guidance on this topic?
>
> Yes, people, please share :)
>
>
> Thanks for the suggestions,
> Andre