jc at irbs.com
Fri Mar 2 19:58:40 EST 2007
Quoting Andre Nathan (andre at digirati.com.br):
> On Thu, 2007-03-01 at 09:25 +0100, lst_hoe01 at kwsoft.de wrote:
> > A few thousand lmtpd would be *way* too much because they all use the
> > same I/O bottleneck (if you don't have partitions on different I/O
> > paths). For a single I/O path i would recommend not more than some 10
> > .. 20 concurrent lmtpd with the exception if you are having complex
> > sieve rules which adds to latency.
> Huh, sorry... read hundreds where I wrote thousands... during peak times
> there 200-250 lmtpd processes. Anyway, it's still too much.
> The partitions are mounted remotely using ATA over Ethernet, with jumbo
> frames enabled. It's an 8-disk RAID-5 array. I'm not sure if the network
> can be the bottleneck. The snmp statistics don't show full utilization
> of the gigabit link.
> I can lower the maximum number of lmtpd processes, but the problem is
> that given the number of connections made to lmtpd from the MTAs, it'll
> quickly reach that number and start bouncing messages.
Postfix should not bounce mail just because it can't connect to an LMTP server.
Queueing and retrying is the normal behavior when a connection to an LMTP or
SMTP server fails.
I would limit the LMTP concurrency to Cyrus either with a transport
entry in master.cf or with the lmtp_destination_concurrency_limit
knob if you are running 2.3 or later. Transports let you tune
per destination.
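For example, a sketch of both approaches (the transport name "cyrus-lmtp", the
host name, and the limit of 20 are placeholders; 20 matches the 10..20 range
suggested earlier):

```
# master.cf: dedicated transport with a process limit (7th column) of 20
cyrus-lmtp  unix  -  -  n  -  20  lmtp

# main.cf: or, on Postfix 2.3+, cap concurrent deliveries per destination
lmtp_destination_concurrency_limit = 20
```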
Other 2.3.X knobs that you should look at:
lmtp_connection_cache_destinations = your Cyrus server names
lmtp_connection_cache_time_limit = 60s or something more than the 2s default
connection_cache_ttl_limit = 60s same as above or larger
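Pulled together in main.cf, that might look like this (the Cyrus host name is
a placeholder; adjust the time limits to taste):

```
# main.cf: keep LMTP connections to Cyrus open between deliveries
lmtp_connection_cache_destinations = cyrus.example.com
lmtp_connection_cache_time_limit = 60s
connection_cache_ttl_limit = 60s
```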
If contention for locks is the problem, reducing the concurrency
will surely help. Keeping the LMTP connections open with connection
caching saves the setup/teardown time for a delivery on both ends.