prevent stuck processes with large folder manipulations
Paul Dekkers
Paul.Dekkers at surfnet.nl
Fri Jan 1 15:45:08 EST 2010
Hi,
From time to time (but mostly at the start of the year ;-)), I notice a
lot of load caused by people archiving their mail folders. Maybe this is
mostly caused by Thunderbird going mad, but I was wondering if I could
do anything on the server side to prevent things from going bad. Right
now I see memory (and swap) exhaustion and the side effects of that
(the Linux kernel killing processes)...
One example: someone was moving tens of thousands of messages from 2009
to a new "2009 folder". Apparently Thunderbird got stuck, maybe because
moving that many messages doesn't happen "instantly" and the server
couldn't finish quickly: Thunderbird opened a lot (~100) of sessions /
imapd processes for this user, perhaps after timeouts.
(I think) only one process was actively doing the links; it looked like
the others were mostly waiting for a write lock (fortunately), waiting
to do the same thing. (Inspected with strace.) But when the process that
hogged the CPU was killed, the next process took over, until all similar
processes were killed. The new archive folder ended up with many
duplicates, containing millions of messages instead of tens of thousands.
(We'll have to see how to dedup that; any ideas are appreciated,
otherwise I'll write something for that.)
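As a starting point for the dedup, a minimal sketch: since the duplicates come from the same COPY being replayed, the copies should be byte-identical, so grouping the message files in the mailbox's spool directory by a content hash finds them. This assumes byte-identical duplicates (comparing Message-ID headers would be an alternative if anything was rewritten), and note that after unlinking files from a Cyrus spool you'd have to run reconstruct on the mailbox so cyrus.index matches again.

```python
import hashlib
import os

def find_duplicates(maildir):
    """Group message files by SHA-1 of their content; return the paths
    of every file whose content was already seen earlier (i.e. all but
    the first copy in each duplicate group)."""
    seen = {}        # digest -> first path with that content
    duplicates = []  # later, byte-identical copies
    for name in sorted(os.listdir(maildir)):
        path = os.path.join(maildir, name)
        if not os.path.isfile(path):
            continue
        with open(path, "rb") as f:
            digest = hashlib.sha1(f.read()).hexdigest()
        if digest in seen:
            duplicates.append(path)
        else:
            seen[digest] = path
    return duplicates

# Review the list before unlinking anything, then run
# "reconstruct user/whoever/2009" so the index is rebuilt.
```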
It just happened again, and it has happened before. This mail server is
not that busy, <100 users, but it hits us at least a few times per year.
Any idea how to prevent things like this? Judging from the man pages I
don't think I can do this from within Cyrus, but would instead have to
enforce it with Linux's ulimit or similar and tune that (sounds like a
tough job)... or can I actually do this with Cyrus parameters?
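One knob that does exist in Cyrus itself is the maxchild option on the service lines in cyrus.conf, which caps how many imapd children the master will fork. It's a global cap across all users, not per-user, so it wouldn't stop one client from grabbing every slot, but it would at least bound the memory damage. A sketch, assuming the stock service name and a limit picked for illustration:

```
# /etc/cyrus.conf -- cap concurrent imapd processes.
# maxchild is a global limit (all users combined), not per-user.
SERVICES {
  imap  cmd="imapd" listen="imap" prefork=5 maxchild=60
}
```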
Curious if people have similar experiences :-)
Regards,
Paul
P.S. This specific machine is running Red Hat 4 and a version of Simon's
(s)rpms.
More information about the Info-cyrus mailing list