Doing Cyrus *right* for a large installation

Bron Gondwana brong at fastmail.fm
Thu Jan 15 18:47:34 EST 2009


On Wed, Jan 14, 2009 at 04:42:04PM -0800, Andrew Morgan wrote:
> But then I started thinking about how I was going to backup all this new 
> data...  Our backup administrator isn't too excited about trying to backup 
> 12TB of email data.
> 
> What if we used Cyrus replication in combination with delayed expunge as a 
> form of "backup"?  We currently only keep 1 month of daily backups 
> anyways...

We don't trust that enough as the only solution - in particular,
user deletes replicate immediately.  Delayed delete as well as
delayed expunge gets you a lot closer.
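
(For reference, both behaviours are imapd.conf options in the 2.3.x
series - check your version's man page - with cyr_expire handling
the eventual reaping via its -X and -D day limits.)

  delete_mode: delayed
  expunge_mode: delayed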

But we still want real backups that are offline.
 
> For those of you running large installations, how do you back them up?

We use a custom set of libraries and scripts that compress
everything into a tar.gz file.  The file contains the actual emails
named by their sha1; index, expunge and header files named by the
mailbox uniqueid; sieve scripts; seen and subs files; and symlinks
from the actual folder names to the uniqueid directories.
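
A rough sketch of the naming scheme, in Perl (the messages/ and
uniqueids/ prefixes are illustrative, not our real layout):

  use Digest::SHA;

  # One copy of each message body, stored once under its sha1.
  sub message_entry {
      my ($rawfile) = @_;
      my $sha1 = Digest::SHA->new(1)->addfile($rawfile)->hexdigest;
      return "messages/$sha1";
  }

  # Per-mailbox metadata keyed by uniqueid, so renames stay cheap.
  sub meta_entry {
      my ($uniqueid, $name) = @_;   # cyrus.index, cyrus.header, ...
      return "uniqueids/$uniqueid/$name";
  }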

This allows us to recreate the original mailboxes precisely, but it
also copes with message copies between folders (just store the new
cyrus.index file - the sha1-named file is already in the backup, so
there is no need to re-fetch it) and with folder renames (just
replace the symlink).

Because we only ever append to the tar file, a folder rename is
actually two entries: the new symlink, plus a symlink with the old
name pointing to an empty string (see the sketch below).
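
With Archive::Tar that might look like this (paths made up; only a
sketch):

  use Archive::Tar;
  use Archive::Tar::Constant qw(SYMLINK);

  my $delta = Archive::Tar->new;
  # The new name points at the unchanged uniqueid directory...
  $delta->add_data('folders/user.brong.newname', '',
                   { type => SYMLINK, linkname => 'uniqueids/1a2b3c4d' });
  # ...and the old name is tombstoned with an empty link target.
  $delta->add_data('folders/user.brong.oldname', '',
                   { type => SYMLINK, linkname => '' });
  $delta->write('rename-delta.tar');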

Every so often we re-compress the backup, zcatting the tar file
through a decider function which either passes each file through to
the output or eats it - meaning only "alive" items get kept.  The
trigger threshold starts at 80% live data and rises 1% per week, so
a re-compression is guaranteed to run at least every 20 weeks.
Even a user who never deletes anything will have stale old copies
of their cyrus.index files, because we take one snapshot per day.
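
In outline, with is_alive() standing in for the real decider (it
knows which sha1s and metadata snapshots are still referenced):

  use Archive::Tar;
  use Archive::Tar::Constant qw(COMPRESS_GZIP);

  # Recompress once the live fraction drops below a threshold that
  # starts at 80% and rises 1%/week - so it always fires by week 20.
  my ($live_ratio, $weeks) = backup_stats('backup.tar.gz');  # hypothetical
  exit 0 if $live_ratio >= 0.80 + 0.01 * $weeks;

  my $old = Archive::Tar->new('backup.tar.gz');   # the zcat step
  my $new = Archive::Tar->new;
  for my $f ($old->get_files) {
      next unless is_alive($f->name);             # keep or eat
      $new->add_data($f->name, $f->get_content,
                     { type => $f->type, linkname => $f->linkname });
  }
  $new->write('backup.new.tar.gz', COMPRESS_GZIP);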

----

This is supported by a server on each IMAP machine which provides
direct access to all the files.  It uses fcntl locking, so that it
blocks while Cyrus is using the files, and Cyrus blocks while it is
working.
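
A sketch with the CPAN File::FcntlLock module (flock(2) isn't
interchangeable with the fcntl locks Cyrus takes):

  use Fcntl qw(O_RDONLY F_SETLKW F_RDLCK SEEK_SET);
  use File::FcntlLock;

  # Take the same kind of fcntl(2) lock Cyrus takes, so each side
  # blocks while the other is using the file.
  sub with_fcntl_lock {
      my ($path, $code) = @_;
      sysopen(my $fh, $path, O_RDONLY) or die "open $path: $!";
      my $lk = File::FcntlLock->new(
          l_type   => F_RDLCK,    # shared is enough for streaming out
          l_whence => SEEK_SET,
          l_start  => 0,
          l_len    => 0,          # zero length == the whole file
      );
      $lk->lock($fh, F_SETLKW) or die "lock $path: " . $lk->error;
      $code->($fh);               # read the file while it's stable
      close $fh;                  # closing drops the lock
  }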

The meta directories are handled in two passes.  The first pass is
just a stat pass, where we ask for a stat(2) on each file.  If ANY
of them has changed, we come back and lock all the files in the
directory before streaming them all to the backup tool.  That way
we're guaranteed a consistent snapshot.
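
Roughly, with remote_stat() and stream_locked() standing in for the
backup protocol:

  # Pass 1: cheap stat(2) comparison against what the backup holds.
  sub meta_dir_dirty {
      my ($files, $cache) = @_;
      for my $f (@$files) {
          my ($size, $mtime) = remote_stat($f);   # hypothetical RPC
          return 1 if !exists $cache->{$f}
                   || $size  != $cache->{$f}{size}
                   || $mtime != $cache->{$f}{mtime};
      }
      return 0;
  }

  # Pass 2, only if something changed: lock every file in the
  # directory, then stream the lot, for a consistent snapshot.
  # usage: stream_locked($dir, $files) if meta_dir_dirty($files, $cache);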

Data files we just fetch and check the sha1 on the result.  They
either fetch correctly or we abort with an error for the sysadmin
(that would be me) to check and fix.  It should never happen.
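
The check itself is simple (fetch_file() is another stand-in):

  use Digest::SHA;

  # A data file must hash to the name it is stored under; on a
  # mismatch, die loudly rather than write a bad backup.
  sub fetch_and_verify {
      my ($sha1) = @_;
      my $data = fetch_file($sha1);                # hypothetical RPC
      my $got  = Digest::SHA->new(1)->add($data)->hexdigest;
      die "sha1 mismatch: want $sha1, got $got\n" if $got ne $sha1;
      return $data;
  }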

The one inconsistency we do have is that seen files aren't locked
for the entire time, so they might fall out of sync with mailboxes.
I don't consider this a giant risk in practice.

----

I am in the process of tidying up and documenting all the modules
that we use for processing backups.  I'd like to publish the whole
lot, perhaps even putting them inside the "perl" directory of the
Cyrus distribution so they can be kept in sync with the rest of
Cyrus.  They include pack/unpack re-implementations of the
cyrus.index and cyrus.header formats.  I've also got a Perl
implementation of the skiplist format that I should polish up and
include.
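
For a flavour of the unpack side - the cyrus.index header is a run
of 32-bit network-order fields, so the leading fields come out like
this (names after 2.3's mailbox.h; illustrative only):

  sub index_header {
      my ($buf) = @_;
      my %h;
      @h{qw(generation_no format minor_version
            start_offset record_size exists)} = unpack 'N6', $buf;
      return \%h;
  }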

Bron.


