Using a SAN/GPFS with cyrus
Stephen L. Ulmer
ulmer at ufl.edu
Sun Feb 1 13:38:26 EST 2004
On 21 Jan 2004, prentice at rcsb.rutgers.edu spake:
> I'm installing Cyrus on a system that will have access to an IBM
> FAStT SAN with GPFS (a parallel filesystem allowing multiple servers
> to share a filesystem on a SAN).
>
> For redundancy, I was thinking of creating the IMAP folder dir and
> spool dir on the SAN and then having two mail servers set up
> identically using Cyrus. If the primary server goes down for any
> reason, the secondary would automatically begin receiving/delivering
> mail based on the MX records in DNS.
>
> Would this present any problems with cyrus if two servers are
> accessing the same directories/files? GPFS should manage file
> sharing, but I'm wondering if there are any known problems with Cyrus
> in this configuration.
>
> Has anyone done this before?
FOR THE LOVE OF GOD, RUN AWAY!
We had our Cyrus message store on GPFS[1] for just about a year. I've
been a Unix systems administrator for almost 15 years; it was the
worst single judgment of my professional career. Period.
During the 18 months when we had GPFS deployed, my unit had TWO crit
sits[2] and uncovered over 30 bugs[3] in the GPFS software alone (not
counting stuff we found in VSD, AIX, et cetera). The situation ended
with the GPFS architect suggesting that we do something else. He's a
great guy, and he helped us many times, but the product just doesn't
do what we wanted.
GPFS is the successor to the MultiMedia Filesystem, which was used in
IBM's Videocharger product. It's *excellent* at streaming small
numbers of large files (like, say, movies). It's horrible when you
get above a few hundred thousand files, as the systems can't possibly
have enough memory to keep track of the filesystem meta-data.
Our Cyrus installation has about 80K users, close to 1TB of disk, and
many millions of files. Just the number of files alone would be enough
to kill the idea of using GPFS.
Cyrus makes pretty extensive use of mmap(), and so does
BerkeleyDB. While GPFS implements[4] mmap(), the GPFS architect had
some words about the way certain operations are accomplished in Cyrus.
I think there are (or used to be) places where an mmap'd file is
opened for write with another file handle (or from another process).
GPFS doesn't handle this well. This technique works accidentally on
non-clustered filesystems because AIX (also) mmap's things for you
behind your back (in addition to whatever you do) and then serializes
all of the access to regions of those files. That's really the only
reason why Cyrus works on JFS.
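To make the access pattern concrete, here is a minimal C sketch (not
Cyrus code; the path and the eight-byte payload are invented) of the
situation described above: one descriptor maps a file while a second
descriptor writes to the same file. Whether the mapped view stays
coherent with that write is exactly the behavior that differs between
a local filesystem like JFS and a clustered one like GPFS:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/tmp/mmap-coherence-test";  /* made-up path */

        /* Descriptor 1: create the file and give it initial contents. */
        int wfd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
        write(wfd, "old data", 8);

        /* Descriptor 2: map the same file read-only, much as Cyrus maps
         * its index files. */
        int rfd = open(path, O_RDONLY);
        char *map = mmap(NULL, 8, PROT_READ, MAP_SHARED, rfd, 0);

        /* Write through the *other* descriptor, as another file handle
         * or another Cyrus process would. */
        lseek(wfd, 0, SEEK_SET);
        write(wfd, "new data", 8);

        /* On AIX/JFS the kernel's own mapping serializes access and the
         * mapped view shows "new data"; on a clustered filesystem the
         * mapped page may be stale.  (Error checking omitted.) */
        printf("mapped view: %.8s\n", map);

        munmap(map, 8);
        close(rfd);
        close(wfd);
        return 0;
    }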
Also note that the other groups/developers within IBM (especially the
group that does the automounter) have their collective heads up their
ass with respect to supporting "after market" filesystems on AIX.
After two freakin' years of PMRs they still couldn't figure out how to
make autofs interact predictably with locally-mounted GPFSs. I
constantly had to employ work-arounds in my automounter maps.
If you just want failover, then use the HACMP[5] product to deal with
the failover scenario. If you need to scale beyond one system image,
try out a Cyrus Murder. That's what we're using now, and it works great.
Note that in the Murder scenario, you can still use HACMP to have one
of your back-ends take over for another if it fails. You just have to
craft your cyrus.conf files carefully so that each instance binds only
to a single IP address; that way you can run two separate instances of
Cyrus on the same machine during a failover.
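For concreteness, a cyrus.conf fragment for one back-end might look
roughly like this (the address is invented; the point is the
per-address listen= so that a second instance, started during a
takeover, can bind the failed peer's address on the same machine):

    SERVICES {
      # bind only to this back-end's own service address,
      # never to the wildcard address
      imap  cmd="imapd"  listen="192.0.2.11:imap"  prefork=5
      lmtp  cmd="lmtpd"  listen="192.0.2.11:lmtp"  prefork=1
    }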
I will be happy to discuss our particular situation, implementation
choices and details with you if you'd like to contact me out-of-band.
We're currently running our Murder on:
2 x p630 [backends]
    4 x 1.4GHz Power4+ CPU
    8GB Real Memory
4 x p615 [frontends]
    2 x 1.2GHz Power4+ CPU
    4GB Real Memory
The frontends are also the web servers for our VirtualHosting
cluster. We're running version 2.1.x of Cyrus. Now that 2.2.x is
stable we'll upgrade, but you can imagine that it'll take some
planning. ;)
[1] GPFS: using CVSD then RVSD in our SP
[2] crit sit: Critical Situation: IBM's tool for managing
barely-tenable customer relationship situations
[3] Something like 90% of our PMRs resulted in code changes
[4] In typical IBM fashion, they implemented *exactly* the POSIX
specification, and not a penny more. I'm not convinced that this
is bad, but it bites me a lot.
[5] High Availability Cluster Multi-Processing(tm), IBM
Regards,
--
Stephen L. Ulmer ulmer at nersp.nerdc.ufl.edu
Senior Systems Programmer http://www.ulmer.org/
Computing and Network Services VOX: (352) 392-2061
University of Florida FAX: (352) 392-9440