Cyrus-imapd memory tuning

Andrew Morgan morgan at orst.edu
Mon Mar 10 13:58:06 EDT 2014


On Mon, 10 Mar 2014, Marco wrote:

> My server is:
> Red Hat Enterprise Linux Server release 6.3 (Santiago)
> It runs without problems; free and vmstat report something like this:
>
>              total       used       free     shared    buffers     cached
> Mem:       8061976    7651020     410956          0    1355964    3412788
> -/+ buffers/cache:    2882268    5179708
> Swap:      4194296      32180    4162116
>
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
>  2  0  32180 386880 1356476 3423712    0    0   643   327   25   18 10  4 81  5  0

Those numbers look okay.  The "-/+ buffers/cache" line shows roughly 5GB 
still available once reclaimable buffers and page cache are discounted.  
Obviously more memory is nice for caching disk I/O, but you're doing fine.

> current cyrus.conf:
> SERVICES {
>   # add or remove based on preferences
>   imap          cmd="imapd" listen="imap" prefork=5
>   pop3          cmd="pop3d" listen="pop3" prefork=3
>   sieve         cmd="timsieved" listen="sieve" prefork=0
>   lmtp          cmd="lmtpd -a" listen="lmtp" prefork=0
> }
>
> I need to prevent memory problems when some oddity causes clients to
> effectively DoS Cyrus. So I would like to configure the Cyrus maxchild
> parameter for imap, choosing a value that avoids memory trouble during
> normal operation given a known amount of system RAM.

Here is what I'm using on a Cyrus backend with 8GB of RAM:

   imap          cmd="/usr/local/cyrus/bin/imapd" listen="imap" proto="tcp4" prefork=10 maxchild=4000
   imaps         cmd="/usr/local/cyrus/bin/imapd -s" listen="imaps" proto="tcp4" prefork=10 maxchild=1000
   sieve         cmd="/usr/local/cyrus/bin/timsieved" listen="sieve" proto="tcp4" prefork=0 maxchild=100
   lmtp          cmd="/usr/local/cyrus/bin/lmtpd" listen="lmtp" proto="tcp4" prefork=1 maxchild=100

I tuned the maxchild setting to balance our usage patterns between the 
imap and imaps ports.  Our highest open connection count is about 1500 
total, so there is quite a bit of headroom.
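
If you want to see how close you get to those limits yourself, one rough 
way (assuming netstat and pgrep are available on your RHEL box) is to count 
established IMAP connections or forked imapd processes:

   # Established IMAP/IMAPS connections (ports 143/993)
   netstat -tn | awk '$6 == "ESTABLISHED" && $4 ~ /:(143|993)$/' | wc -l

   # Or simply count the imapd processes the master has forked
   pgrep -c imapd

Watching that over a normal day tells you how much headroom your maxchild 
values actually leave.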

> I see that an imapd process takes on average 22-25MB. With 8GB of RAM,
> the server should already be swapping with fewer than 400 connections;
> that doesn't happen, so this estimate is wrong or far too conservative.
> I think I should instead look at the difference between RSS and SHR
> memory when sizing the number of imapd processes, but I'm not sure.
>
> Could you help me with this tuning? In particular I'm interested in the
> relation between memory usage and the maxchild number of imapd processes.

I'm running a Cyrus Murder cluster with separate frontends and backends, 
so my numbers won't directly correlate.  On a backend with about 700 imapd 
processes, I have the following memory usage:

              total       used       free     shared    buffers     cached
Mem:       8200092    8136084      64008          0    2735124    1614016
-/+ buffers/cache:    3786944    4413148
Swap:      1951736      36544    1915192
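
You're right that raw RSS overstates the cost of each extra imapd, since a 
lot of it is shared between processes (the binary, libraries, mmap'ed 
databases).  As a rough sketch, assuming a Linux kernel that exposes 
/proc/<pid>/smaps, you can compare the average RSS with the private 
(non-shared) portion of a single process:

   # Average RSS across all imapd processes (includes pages shared between them)
   ps -C imapd -o rss= | awk '{sum+=$1; n++} END {if (n) printf "%d procs, avg RSS %.1f MB\n", n, sum/n/1024}'

   # Private (non-shared) memory of one imapd, closer to the true
   # incremental cost of each additional process
   awk '/^Private/ {sum+=$2} END {print sum " kB private"}' /proc/$(pgrep -o imapd)/smaps

Dividing the memory you're willing to hand to imapd by that private figure 
gives a more realistic ceiling than RSS alone, which is why 400 x 25MB 
doesn't match what you actually see.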

> Meanwhile I would also like to tune the maxfds parameter. With lsof I
> measure about 60 open files per imapd process. With 400 imapd processes
> that would mean a system-wide 'ulimit -n' of 60*400=24000. This must be
> wrong, because I currently have a 4096 limit and have never had
> problems. Maybe I should only count 'Running' processes when computing
> this threshold?

My Cyrus init script does:

# Crank up the limits
ulimit -n 209702   # max open file descriptors, per process
ulimit -u 4096     # max user processes
ulimit -c 102400   # core file size limit, so crashes leave cores for debugging

This particular backend has:

root@cyrus-be1:~# cat /proc/sys/fs/file-nr
25696   0       819000

Again, this is with about 700 imapd processes.
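
Note that 'ulimit -n' is a per-process limit (inherited by the children the 
master forks), while /proc/sys/fs/file-nr is system-wide: allocated file 
handles, allocated-but-unused handles, and the fs.file-max ceiling.  If you 
want to see where individual imapd processes actually sit, a quick sketch 
(assuming Linux /proc and pgrep, run as root):

   # Open descriptors per imapd process - this is what 'ulimit -n' limits
   for pid in $(pgrep imapd); do
       echo "$pid $(ls /proc/$pid/fd 2>/dev/null | wc -l)"
   done | sort -k2 -n | tail -5

   # System-wide: allocated handles, unused allocated handles, fs.file-max
   cat /proc/sys/fs/file-nr

With ~60 fds per imapd, your per-process 4096 limit has plenty of slack, 
which is why you've never seen problems.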

 	Andy

