I/O Errors
Andrew Morgan
morgan at orst.edu
Wed Nov 9 12:39:52 EST 2005
On Wed, 9 Nov 2005, Pascal Mouret wrote:
> Hello all,
>
> Since I upgraded Cyrus-imap to v2.2.12, I'm experiencing a lot of I/O errors,
> due to the number of open files. Here is a short excerpt of what I can read
> in imapd.log :
> [...]
> Nov 8 16:15:53 mailup pop3[281152]: IOERROR: opening
> /var/spool/imap/user/lafitte/cyrus.index: Too many open files
> Nov 8 16:15:53 mailup pop3[281152]: Unable to lock maildrop for lafitte:
> System I/O error
> [...]
> Nov 8 16:41:40 mailup imap[267212]: IOERROR: opening
> /var/spool/imap/user/debast/Trash/cyrus.cache: Too many open files
> [...]
> Nov 8 16:41:56 mailup imap[267212]: IOERROR: opening
> /var/imap/user/d/debast.seen: Too many open files
> Nov 8 16:41:56 mailup imap[267212]: DBERROR: error fetching txn cyrusdb
> error
> Nov 8 16:41:56 mailup imap[267212]: Could not open seen state for debast
> (System I/O error)
> Nov 8 16:41:56 mailup imap[267212]: IOERROR: creating
> /var/spool/imap/user/debast/cyrus.index.NEW: Too many open files
> [...]
> Nov 8 16:42:20 mailup imap[267212]: IOERROR: opening
> /var/spool/imap/user/waguet/cyrus.index: Too many open files
> [...]
>
> I get about one such error every minute; everything works fine the rest of
> the time.
> I checked my system settings. It does not seem to be a problem with the
> global maximum number of open files, as no other process reports such
> errors. It appears I could tune the maximum number of open files allowed
> per process, but before tweaking that, which looks tricky to me, I was
> wondering whether there may be an error in my configuration.
> I have about 1000 concurrent users out of a total of 2000 users. I had no
> such problems before upgrading. I kept the same configuration, except that I
> changed from BerkeleyDB to skiplist for all databases (except for seendb,
> which still uses "flat").
> Has anyone encountered this before?
> Any ideas?
> Any hints would be greatly appreciated.
> Thank you very much in advance.
>
> Pascal Mouret
I crank up the resource limits for cyrus in its init script, as follows:
# Crank up the file limits
ulimit -n 209702
ulimit -u 2048
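(ulimit -n raises the per-process open file descriptor limit and ulimit -u
raises the per-user process limit; putting them in the init script means they
apply to the Cyrus master and everything it spawns. To see how close a busy
imapd or pop3d process is getting to that limit, you can count its open
descriptors, where <pid> is just a placeholder for the process ID:

# number of file descriptors currently open by one process
ls /proc/<pid>/fd | wc -l
)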
Obviously, I've already increased the system-wide limit to accommodate
this. My system has about half the number of concurrent users as yours.
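In case it helps, the system-wide ceiling on Linux lives in
/proc/sys/fs/file-max; something along these lines should do it (the value
below is just an example, size it to match what you give ulimit -n):

# raise the system-wide open file limit (example value)
echo 209702 > /proc/sys/fs/file-max

or set the equivalent fs.file-max entry in /etc/sysctl.conf so it survives a
reboot.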
Here is what I see for open file usage:
cyrus-be1:/proc/sys/fs# cat file-nr
19560 0 205988
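(The three numbers are allocated file handles, allocated-but-unused handles,
and the system-wide maximum, so this box is using roughly 19,560 handles out
of a ceiling of 205,988. If you want to see which processes are eating
descriptors, a rough one-liner like this works, assuming lsof is installed
and your Cyrus processes run as a user named "cyrus" -- adjust the name to
your installation:

# count open files per PID for the cyrus user, busiest first
lsof -u cyrus | awk '{print $2}' | sort | uniq -c | sort -rn | head
)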
My limit is probably overkill, but I'd rather play it safe.
Andy