imap scalability

Andrew Morgan morgan at orst.edu
Thu Oct 7 12:33:38 EDT 2004


Wasn't there an entry in the Cyrus Wiki at one point for a list of the
current hardware configs that people were using with Cyrus?  I can't seem
to find it now...

Also, there is some weird output on some of the Wiki pages, such as:

http://acs-wiki.andrew.cmu.edu/twiki/bin/view/Cyrus/MboxCyrusMigration

Below the content section of the page is a huge list of URLs, all of them
with special html entities that my browser can't display.  It starts with:

[http://www.haishun.net ] [http://www.haishun.net ]
[http://www.genset-sh.com ] [http://www.haishun.net/p_mjds.htm ]

and goes on and on.  Is anyone else seeing this?

	Andy

On Thu, 7 Oct 2004, denz-wavenet wrote:

> hi!
>
> Can you send us your software and hardware configuration for a case study?
> How is redundancy implemented?  How are the backups implemented?
> We hope to use spampd with SpamAssassin; will this be a concern?  For now,
> no antivirus scanning is planned.
>
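A common way to hook spampd into Postfix is as an after-queue content filter;
a minimal sketch, assuming spampd listens on 127.0.0.1:10025 and relays the
scanned mail back to a re-injection smtpd on 127.0.0.1:10026 (ports and
restrictions here are only examples, not the actual setup being asked about):

  # main.cf -- hand all queued mail to spampd for SpamAssassin scoring
  content_filter = smtp:[127.0.0.1]:10025

  # master.cf -- listener that accepts the scanned mail back from spampd,
  # with the filter disabled so mail does not loop
  127.0.0.1:10026 inet n - n - - smtpd
      -o content_filter=
      -o mynetworks=127.0.0.0/8
      -o smtpd_client_restrictions=permit_mynetworks,reject
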
> denzel.
>
> ----- Original Message -----
> From: "Michael Loftis" <mloftis at wgops.com>
> To: "denz-wavenet" <denz at wavenet.lk>; <info-cyrus at lists.andrew.cmu.edu>
> Sent: Thursday, October 07, 2004 10:24 AM
> Subject: Re: imap scalability
>
>
> >
> > --On Thursday, October 07, 2004 09:54 +0600 denz-wavenet <denz at wavenet.lk>
> > wrote:
> >
> > > hi!
> > >
> > > Requirements:    host 10,000 IMAP mailboxes
> > >
> > > Usual setup:  LDAP/SMTP-Postfix/Cyrus-IMAP
> >
> > First -- do not run Cyrus over NFS, just don't do it.  Second, do not
> > share spool areas; Cyrus does not handle this.
> >
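Each back-end keeps its mail spool on local disk; in imapd.conf that is the
configdirectory and the partition-* settings.  A minimal sketch (the paths and
the extra partition name are only examples):

  configdirectory: /var/lib/imap
  partition-default: /var/spool/imap
  # a second local partition, e.g. on a separate spindle set
  partition-fast: /var/spool/imap-fast
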
> > If you've got compliant clients and servers then NFS *might* work.  You
> > can't use a Linux NFS server; it doesn't qualify.  And I'm pretty sure the
> > Linux NFS client is also in the "doesn't qualify" category.  And performance
> > will suck unless you go full Gig-E on a separate back-end network, and even
> > then DAS or SAN will give you far more attractive results.
> >
> > Cyrus addresses the scaling issue with Murder and multiple back-ends,
> > but the back-ends DO NOT SHARE storage.  At all.
> >
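Very roughly, in a Murder each back-end pushes its mailbox list to a mupdate
master and the front-ends proxy connections based on that list.  A partial
imapd.conf sketch with placeholder hostnames and credentials (see the Cyrus
Murder documentation for the complete setup):

  # on a back-end
  servername: be1.example.edu
  mupdate_server: mupdate.example.edu
  mupdate_authname: mupdateuser
  mupdate_password: secret
  # account the front-ends are allowed to proxy as
  proxyservers: murderproxy

  # on a front-end
  mupdate_server: mupdate.example.edu
  proxy_authname: murderproxy
  proxy_password: secret
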
> > That server may or may not be enough.  It really depends on what you mean
> > by 10,000 users.  If you mean 10k concurrent connections, no.  And if it
> > has IDE drives, forget about it even for 10k mailboxes; IDE drives aren't
> > going to keep up.  You'll need about 3k random block IOs/second bare
> > minimum with reiserfs and about 30-40G of mail data.  Other filesystems
> > will have other patterns; ext2/3 will probably see a LOT more read traffic
> > due to the nature of its inode layout.  Those numbers come from my live
> > system, where we've got around 12k mailboxes, probably 3k-4k active users,
> > and about a million envelopes/day of inbound mail volume, with a LOT more
> > dropped before that count by DNS blacklists up front.
> >
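As a rough sanity check on those figures (the peak multiplier below is just an
assumption): a million envelopes a day averages out to only about a dozen
deliveries a second, but the disk load is much higher because each delivery
touches several files and IMAP clients are polling on top of it:

  1,000,000 msgs/day / 86,400 s/day ~= 12 msgs/s average
  peaks of, say, 3-5x the average   ~= 35-60 msgs/s
  Each delivery writes the message file and updates cyrus.index, cyrus.cache
  and the quota, so the random-IO requirement sits well above the raw
  message rate.
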
> > Scaling also depends on your inbound mail and mail flow.  Are you going to
> > be running AV scanning?  How about SpamAssassin?  Are you going to allow
> > users to run scripts on their mail (I recommend not) -- and by that I mean
> > things like procmail, *not* Sieve.
> >
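Sieve scripts are executed at delivery time by Cyrus itself (lmtpd), which is
why they scale where per-user procmail does not.  A minimal example that files
SpamAssassin-tagged mail (the folder name is only an example):

  require ["fileinto"];

  # file anything SpamAssassin has marked as spam
  if header :contains "X-Spam-Flag" "YES" {
      fileinto "INBOX.Spam";
  }
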
> > At first glance, presuming that box has a beefy disk subsystem -- and I
> > mean beefy, like 7x10K RPM U160 SCSI drives on a real RAID subsystem
> > behind a nice higher-end ICP Vortex card -- it should manage.  It may get
> > a little tight at times, but it should manage alright, as long as you
> > don't mean 10k concurrent sessions.  If you do mean that, you're outside
> > the league of a single box and talking about a decent-sized load-balanced
> > front-end/back-end group of systems.
> >
>
---
Cyrus Home Page: http://asg.web.cmu.edu/cyrus
Cyrus Wiki/FAQ: http://cyruswiki.andrew.cmu.edu
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
