Cyrus, clusters, GFS - HA yet again
Janne Peltonen
janne.peltonen at helsinki.fi
Mon Oct 30 03:26:09 EST 2006
Hi,
thanks for the answers!
On Fri, Oct 27, 2006 at 07:22:43AM -0400, Dave McMurtrie wrote:
> >And if further splitting of users on more servers is needed - downtime
> >again. Moreover, it's confusing for the users to have to determine their
> >correct imap server name - we haven't really had trouble with this, but
> >it would be nice if the users saw a unified system image.
> If you decide not to pursue a cluster solution, Perdition would probably
> help you with this part.
OK. Is there any gain in using Perdition instead of Murder? Is it more
stable? Less complicated? More widely used? Better suited to a system
of our size (and why)?
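(As I understand it, Perdition is a standalone IMAP/POP proxy that maps
each user to a backend via a simple map file, so everybody would connect
to a single name. Something like the sketch below - I'm guessing at the
exact syntax, and the names are made up:)
--clip--
# /etc/perdition/popmap - sketch only; syntax from memory
jdoe: imap1.example.org
msmith: imap2.example.org:143
--clip--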
> >Enter weirdness. The first Cyrus to be started starts with no
> >complaints and ends up with the correct number (as specified in
> >/etc/cyrus.conf) of imapd, imapd -s, pop3d, lmtp etc. processes, all in
> >state S, only one process at a time having a write lock on
> >/var/lib/imap/socket/xxx-N.lock.
> These lockfiles are used to serialize their corresponding processes
> (imap, lmtp, etc) on a per-host basis, not across cluster nodes. As
> such, you should write these to the local filesystem and not the cluster
> filesystem. You can accomplish this with symbolic links.
Thanks, that helped. Now both cyrus-imapds run without blocking.
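(For the archives, what I did on each node was roughly the following -
directory names are ours, adjust to taste:)
--clip--
# move the lock directory off the cluster filesystem; the symlink
# lives on the shared /var/lib/imap, but resolves to a node-local
# directory on each node
mkdir -p /var/imap-local/socket
chown cyrus:mail /var/imap-local/socket
mv /var/lib/imap/socket /var/lib/imap/socket.orig
ln -s /var/imap-local/socket /var/lib/imap/socket
--clip--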
But I still seem to get some weird DB errors, the same ones as before:
if I log in and out on the node on which Cyrus was started first, the
imapd process that accepted my connection complains about a DBERROR on
exit:
--clip--
Oct 30 09:21:19 lcluster2 imap[10378]: login: localhost.localdomain [127.0.0.1] cyrus plaintext User logged in
Oct 30 09:22:21 lcluster2 imap[10378]: DBERROR db4: PANIC: fatal region error detected; run recovery
Oct 30 09:22:21 lcluster2 imap[10378]: DBERROR: critical database situation
Oct 30 09:22:21 lcluster2 master[10368]: process 10378 exited, status 75
Oct 30 09:22:21 lcluster2 master[10368]: service imap pid 10378 in READY state: terminated abnormally
--clip--
The log doesn't say which database is corrupted, nothing actually seems
to get corrupted, and I can log in and out on the other node (the one
on which I started Cyrus later) without it seeing any database
corruption:
--clip--
Oct 30 10:00:09 lcluster1 imap[10108]: login: localhost.localdomain [127.0.0.1] cyrus plaintext User logged in
Oct 30 10:01:11 lcluster1 master[10092]: process 10108 exited, status 0
--clip--
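(The log's advice to "run recovery" presumably means Berkeley DB
recovery on the database environment. With Cyrus shut down on both
nodes, I suppose something like this would do it - the paths are from
our RHEL install:)
--clip--
# stop Cyrus on *both* nodes first, then on one node only:
su cyrus -c '/usr/lib/cyrus-imapd/ctl_cyrusdb -r'
--clip--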
Now there seems to have been a difference of opinion on this list about
whether the Cyrus databases can reside on clustered filesystems.
According to this post from Chris St Pierre, they can't:
http://irbs.net/internet/info-cyrus/0607/0442.html
and according to this reply from Scott Adkins, they can and do:
http://irbs.net/internet/info-cyrus/0607/0449.html
Back in 2004, Ken Murchison wrote (
http://www.irbs.net/internet/info-cyrus/0406/0537.html
):
--clip--
John C. Amodeo wrote:
> I would be very interested to know if anyone is using a Cyrus cluster
> connected to a SAN using a cluster file system. Would Cyrus die if two
> servers were trying to access the same mailstore / db files?
As long as the filesystem provides correct locking and memory mapping,
the Cyrus processes don't care.
--clip--
and, moreover (
http://www.irbs.net/internet/info-cyrus/0406/0538.html
):
--clip--
> Could one theoretically create a
> redundant, loadbalancing cluster using two boxes, GFS and a SAN?
Yes, see my earlier post about the 2.3 branch.
--clip--
But the above-mentioned earlier post (actually, the one my first clip
is from) contained something about using MUPDATE to synchronize
mailboxes.db between the two nodes, and about using some esoteric DB
formats for deliver.db and tls_sessions.db. The post by Scott Adkins,
on the other hand, made no mention of such things.
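(If the trick really is in the database backends, I'd guess the
relevant imapd.conf knobs are the *_db options - though I haven't
verified which formats Ken actually meant, so the values below are just
placeholders of mine:)
--clip--
# /etc/imapd.conf - sketch only; the option names are real, the
# values are my guesses, not necessarily what Ken recommended
duplicate_db: skiplist
tlscache_db: skiplist
--clip--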
So I'm confused. I /am/ using a clustering FS with correct locking and
memory mapping, but I'm still getting the (apparent) DB errors that
Chris and others said there would be. That's why I wanted to ask those
who have succeeded in creating a clustered Cyrus - be it on Veritas or
the Tru64 cluster system - how they did it.
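(One way I could double-check the "correct memory mapping" premise on
GFS would be a tiny two-node test along these lines - my own sketch;
start the writer on one node, the reader on the other, and see whether
the mapped byte ever changes from A to B:)
--clip--
/* mmap coherence check across cluster nodes - my own sketch */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = "/gfs/mmap-test"; /* a file on the shared FS */
    int fd, i;

    if (argc > 1 && !strcmp(argv[1], "writer")) {
        fd = open(path, O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }
        write(fd, "A", 1);       /* initial byte */
        sleep(10);               /* give the reader time to map it */
        pwrite(fd, "B", 1, 0);   /* does the other node see this? */
        close(fd);
    } else {                     /* reader: run on the other node */
        char *p;
        fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open (start the writer first)"); return 1; }
        p = mmap(NULL, 1, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        for (i = 0; i < 15; i++) {  /* poll the mapped byte */
            printf("byte = %c\n", *p);
            sleep(1);
        }
        munmap(p, 1);
        close(fd);
    }
    return 0;
}
--clip--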
> I believe you'll also need to make some minor code changes. When
> University of Pittsburgh implemented their Cyrus cluster, they added a
> nodename config option and then used that nodename as a filename
> component along with the pid for the lmtp temporary deliver files such
> that the filenames would be unique across cluster nodes. Without
> modification, only the pid is used as a means to make the filenames unique.
Thanks, I'll look into this if I can find my way around those DB errors
first.
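(If I read the description right, the change amounts to something like
this - my own untested sketch, not Pitt's actual patch; "nodename"
would come from the new imapd.conf option they describe:)
--clip--
#include <stdio.h>
#include <unistd.h>

/* stock behaviour: the temporary deliver file is unique per host
 * only, because only the pid goes into the name */
void stage_name(char *buf, size_t len, const char *stagedir)
{
    snprintf(buf, len, "%s/%lu", stagedir, (unsigned long) getpid());
}

/* the Pitt idea, as I understand it: prefix the pid with a per-node
 * name so two cluster nodes can never pick the same file */
void stage_name_cluster(char *buf, size_t len,
                        const char *stagedir, const char *nodename)
{
    snprintf(buf, len, "%s/%s-%lu", stagedir, nodename,
             (unsigned long) getpid());
}
--clip--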
--Janne