doing replication from two machines to one machine

Rudy Gevaert Rudy.Gevaert at UGent.be
Fri May 5 03:21:04 EDT 2006


Bron Gondwana wrote:
> On Thu, May 04, 2006 at 02:35:55PM +0200, Rudy Gevaert wrote:
> 
>>For now we are not going to use that in a Murder.  At
>>http://users.ugent.be/~rgevaert/HA-IMAP-2.png you can find a picture
>>of the setup we are aiming for.
> 
> 
> Looks rather similar to what we're doing at FastMail.FM.
> 
> 
>>Each user has his mailbox on one mailstore.  Each mailstore will be
>>running a Perdition IMAP/POP proxy that redirects each user to the
>>correct mailstore.
>>
>>Each mailstore has a LUN mounted from the local storage device.
>>
>>I would set up replication like this:
>>
>>Mailstores 1 to 3, which are at the Rectoraat site, are replicated to
>>the replication server at the S9 site (replication mailstore mgr 2).
>>
>>Mailstores 4 to 6, which are at the S9 site, are replicated to the
>>replication server at the Rectoraat site (replication mailstore mgr 1).
>>
>>The replication masters are connected to the same storage device as
>>the local mailstores, but are on slower disks.
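
For concreteness, a minimal sketch of the master-side settings such a
pairing implies, assuming Cyrus 2.3's sync_client/sync_server rolling
replication (hostnames and credentials below are placeholders, not a
real config):

    # imapd.conf on mailstores 1-3, pointing at replication mgr 2
    sync_log: 1                        # log changes for rolling replication
    sync_host: repl-mgr2.example.com   # placeholder for mgr 2's address
    sync_authname: replication
    sync_password: secret
    sync_repeat_interval: 30           # pause between rolling sync runs

    # cyrus.conf, START section, on the same mailstores
    syncclient    cmd="sync_client -r"   # rolling-mode replication client

Mailstores 4-6 would carry the mirror-image settings pointing at mgr 1.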
> 
> 
> We're doing something similar as well.  [disclaimer: this isn't actually
> up and running in production yet, but seems OK in testing]
> 
> What we've done is create separate IP addresses for each instance of
> Cyrus on each server, and then create a separate imapd.conf and
> cyrus.conf.  These are all generated using an unholy mix of Perl and
> Makefiles to ensure that they all match up on all servers.  We then
> start one whole instance of Cyrus bound to the specific IP address we've
> set aside for that service for each master and each replica.
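
As a sketch of that layout, one generated master/replica instance pair
might look like the following.  The IPs, paths, and service names are
illustrative, and the csync port is assumed to be registered in
/etc/services (e.g. "csync 2005/tcp"):

    # cyrus.conf for a master instance, bound to its own address
    SERVICES {
      imap        cmd="imapd"        listen="192.0.2.11:imap"  prefork=5
    }

    # cyrus.conf for a replica instance on the same box, another address
    SERVICES {
      syncserver  cmd="sync_server"  listen="192.0.2.12:csync"
    }

    # each instance is started with its own pair of config files
    /usr/cyrus/bin/master -d -C /etc/cyrus/store1/imapd.conf \
                             -M /etc/cyrus/store1/cyrus.conf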
> 
> 
>>In this setup we always have a backup of our data at the remote site.
>>
>>I have thought of the following disaster scenarios:
>>
>>* a blade server fails -> a spare blade server takes over, using the storage
>>* a blade chassis fails -> the replication mailstore mgr at the same site 
>>takes over for the mailstores in the failed chassis (and continues to do 
>>the replication for the mailstores at the other site)
>>* the whole site fails -> the replication mailstore mgr at the other 
>>site takes over with the replicated data.
> 
> 
> Does this mean you're taking two replica copies of each data set? 

No.  If a blade chassis fails we still have our data, because it is
on the SAN.  We then simply connect the X4100 to the SAN and let it
take over.

> I
> don't know that that's possible at the moment without much messing
> around.  Certainly replicating two different sets onto one machine is:
> just create two independent Cyrus instances with separate configs and
> directories and run them on different IPs/ports (IPs are easy IMHO).

I hadn't thought of it like this!  I was thinking I needed to use
partitioning to make it work, so that we could easily take over the
services.

But running the instances on different IPs... well, that just seems
easier!  I'll look into it.
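
If I understand it correctly, the takeover box would then hold two
self-contained trees, something like this (paths and addresses are
just placeholders):

    # instance A: replica for mailstores 1-3
    /etc/cyrus/repl-a/imapd.conf:
        configdirectory: /var/imap-a
        partition-default: /var/spool/imap-a
    /etc/cyrus/repl-a/cyrus.conf, SERVICES section:
        syncserver  cmd="sync_server"  listen="192.0.2.21:csync"

    # instance B: replica for mailstores 4-6, same box, second IP
    # (same layout, with /var/imap-b and listen="192.0.2.22:csync")

    # start both, each with its own configs
    /usr/cyrus/bin/master -d -C /etc/cyrus/repl-a/imapd.conf -M /etc/cyrus/repl-a/cyrus.conf
    /usr/cyrus/bin/master -d -C /etc/cyrus/repl-b/imapd.conf -M /etc/cyrus/repl-b/cyrus.conf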

> It would be very nice to have something like MySQL's binlog system,
> where each slave server is responsible for tracking the master's log,
> and we run a cron job which polls the slaves to make sure they're all
> up to date and then deletes the no-longer-needed binlog files.  That
> seems more flexible.
> 
> I also haven't worked with blade servers.  All our hosts have either
> built-in or SCSI-attached storage, which isn't shared.
> 

Thanks for your input on this!

-- 
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
Rudy Gevaert                             e-mail: Rudy.Gevaert at UGent.be
Directie ICT, Afdeling Infrastructuur
Groep Systemen                                      tel: +32 9 264 4734
Universiteit Gent / Ghent University                fax: +32 9 264 4994
Krijgslaan 281, gebouw S9, 9000 Gent, Belgie               www.UGent.be
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

