Scaling of imap servers

Bron Gondwana brong at fastmail.fm
Wed Jun 15 07:53:33 EDT 2011


On Wed, 15 Jun 2011 17:03:53 +0530
Ram <ram at netcore.co.in> wrote:

> We need to create a platform for a large number of Cyrus accounts
> that can scale indefinitely.
> I could start with just 2k users, but could have 20x the number of
> accounts by next year.
> I was thinking of taking a cloud-based machine at Amazon or Rackspace
> and scaling hardware vertically as required.
> But the problem with Rackspace or Amazon is that they do not offer
> much storage and cap the total storage that can be used.

Worry about IO too.  IO is your killer.
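
IMAP load is mostly small random reads and writes (mailbox databases
plus one file per message), so raw capacity matters much less than
IOPS.  A quick way to see whether a box is IO bound (assuming Linux
with the sysstat tools installed):

    # high %util and climbing await on the spool device = IO bound
    iostat -x 5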

> So what is the best way of creating a scalable setup?

We're really happy with this config:

* 2U rack case
* two server-grade CPUs
* 48GB RAM
* hardware RAID controller with battery backup
* two SSDs (64GB or larger) in RAID1
* 12 x 2TB SATA drives

We create 5 x RAID1 volumes from 10 of those disks and keep the other
two as hot spares, giving 5 x 2TB = 10TB of usable space.  Total system
cost is about US $13k per machine.

> If I use Cyrus Murder there is still the challenge of a single
> mupdate server, which cannot handle more than "n" requests at a
> time.
> So I plan to use nginx proxy servers that just redirect the requests
> to multiple servers behind them.
> For scaling I will have to add more servers behind nginx, for as
> long as the proxy can support it.

That's what we do, it's great.
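
The nginx mail module handles exactly this: a small auth_http service
authenticates each login and tells nginx which backend to proxy the
connection to.  A minimal sketch of the idea (the auth service URL and
port here are placeholders, not a description of our actual setup):

    mail {
        # the auth_http service answers each login with Auth-Status,
        # Auth-Server and Auth-Port headers, so per-user backend
        # routing lives in that one small service, not in nginx itself
        auth_http 127.0.0.1:9000/auth;

        server {
            listen   143;
            protocol imap;
        }
    }

Adding capacity is then just adding another backend and teaching the
auth service about it; nginx never needs the full server list.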

> Even though this is already working, I don't see it as a long-term
> solution. Backups, HA, DR, etc. are all not very clean.
> 
> I would like to know how you guys do the same.

HA is overrated.  Manual failover can be achieved within a short time with
a bit of monitoring.

We use replication, because replication rocks.  It's much better for
both minor disaster recovery and speed of failover than any
backup-based recovery system.
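
For reference, rolling replication in the Cyrus 2.3/2.4 series is
driven by a handful of imapd.conf and cyrus.conf entries; a rough
sketch, with the hostname and credentials as placeholders:

    # master imapd.conf: log mailbox changes and point sync_client
    # at the replica
    sync_log: 1
    sync_host: replica.example.com
    sync_authname: repluser
    sync_password: secret

    # master cyrus.conf (START section): run a rolling sync client
    syncclient    cmd="sync_client -r"

    # replica cyrus.conf (SERVICES section): accept the sync stream
    syncserver    cmd="sync_server" listen="csync"

The replica stays only seconds behind the master, which is what makes
failover fast.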

That said, we have backups too.  Unfortunately the backup system is
still quite custom :(  But you can use a more general backup system and
just take the additional IO hit.

Bron.

