Clustering and replication
Janne Peltonen
janne.peltonen at helsinki.fi
Tue Jan 30 04:33:20 EST 2007
On Mon, Jan 29, 2007 at 04:14:33PM -0800, Tom Samplonius wrote:
>
> ----- "Simon Matter" <simon.matter at invoca.ch> wrote:
> >
> > Believe it or not, it works and has been confirmed by several people on
> > the list using different shared filesystems (VeritasFS and Tru64 come to
> > mind). In one thing you are right: it doesn't work with BerkeleyDB.
> > Just switch all your BDBs to skiplist and it works. This has really been
> > discussed on the list again and again, and isn't it nice to know that
> > clustering Cyrus that way works?
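For reference, the backend switch mentioned above amounts to something like the following in imapd.conf (a minimal sketch; the exact list of *_db options varies between Cyrus versions, so check the imapd.conf man page for your release):

```
# imapd.conf -- switch Cyrus database backends from berkeley to skiplist
# (hedged example; option names taken from the Cyrus 2.3 era)
duplicate_db: skiplist
mboxlist_db: skiplist
seenstate_db: skiplist
tlscache_db: skiplist
annotation_db: skiplist
```

Existing BerkeleyDB files have to be converted (the cvt_cyrusdb tool shipped with Cyrus does this) before restarting the server with the new settings.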
>
> Yes, useful. But the original poster wanted to combine Cyrus
> application replication with a cluster filesystem (GFS
> specifically). It seems pretty unusual to combine both. GFS has a
> lot of locking overhead when writing, and e-mail storage is pretty
> write-intensive. And Cyrus replication can have its own performance
> issues (slow replication that never catches up). Why do both at the
> same time?
The reason to use a clustering FS: if it works, it is very simple. Each
node can be more or less identical and, therefore, more or less
redundant. And the system is scalable (just add a node), up to the point
where the GFS locking overhead becomes the bottleneck. HA is
simple too: if you want to do maintenance on a node, just take it offline -
nobody will notice.
The reason to use replication: the "master" cluster has but one
filesystem. Of course it's on a SAN, RAID, and so on. But if the SAN fails -
such a thing happened here a couple of years ago; even SANs fail - we lose all
mail received since the last backup. Not nice. (But if combining these two
really results in severe performance losses, we might have to
reconsider.)
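For context, wiring up Cyrus replication on top of such a setup looks roughly like this (a hedged sketch based on the Cyrus 2.3 replication documentation; the hostname and credentials are placeholders, not our actual configuration):

```
# imapd.conf on the master -- replication sketch, Cyrus 2.3 option names
sync_log: 1                      # write a rolling log of mailbox changes
sync_host: replica.example.org   # placeholder replica hostname
sync_authname: repluser          # placeholder credentials
sync_password: secret
```

The replica then runs the sync server (started from cyrus.conf), and a sync_client process on the master ships the logged changes across. The "slow replication that never catches up" issue mentioned above shows up when that log grows faster than sync_client can drain it.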
> And GFS 6.1 (current version) has some issues with large directories:
>
> https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=214239
This might be a problem, since we have some users who really do have 10000
messages in their INBOX. Although it seems that Cyrus itself cannot cope
with this either, at least in our current, non-clustered setup. But then,
that's an old version.
--Janne