Cyrus and scale-out

Alvin Starr alvin at netvel.net
Sun Jun 12 19:30:21 EDT 2016


I was under the impression that Cyrus Murder handled the horizontal 
scale-out of mailboxes across multiple servers.

I have not yet needed to scale up to the point where this would be an 
issue, but I would love to know what the answer would be.
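
For what it's worth, my rough mental model of Murder is: each mailbox 
lives on exactly one backend, a mupdate master tracks where everything 
is, and the frontends proxy IMAP connections to the right place. A 
minimal frontend imapd.conf would look something like this (hostnames 
and credentials are made up, and the option set is worth checking 
against the docs for your version):

    # imapd.conf on a Murder frontend (example values)
    mupdate_server: mupdate.example.org
    mupdate_authname: mupdateuser
    mupdate_password: secret
    # backends this frontend is allowed to proxy to
    serverlist: backend1.example.org backend2.example.org
    proxy_authname: proxyuser
    proxy_password: secret

So Murder spreads mailboxes across servers rather than giving every 
backend access to the same spool, which I suspect is exactly the 
distinction at issue in the thread below.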


On 06/12/2016 10:14 AM, Bron Gondwana via Info-cyrus wrote:
> Funny you should ask :)
>
> http://asg.andrew.cmu.edu/archive/message.php?mailbox=archive.cyrus-devel&msg=4939
>
> I definitely have plans to allow everything to be written back in a reliable way, so that losing an IMAP server is guaranteed (within the bounds of software reliability and all the parts following their contracts) not to lose anything which has been acknowledged back to the user!
>
> Bron.
>
> On Sat, Jun 11, 2016, at 01:18, Sebastian Hagedorn via Info-cyrus wrote:
>> Hi,
>>
>> our systems guys keep telling us that we are doing things in an
>> old-fashioned way and should get with the program.
>>
>> We are currently using a single Cyrus server with roughly 13 TB of storage
>> provided by a SAN. We used to have a Red Hat High Availability cluster, but
>> we traded that in for a VMware HA setup earlier this year. So far we have
>> scaled up. We have added processors, RAM and storage to that single
>> (virtual) machine whenever necessary.
>>
>> According to our systems people, we should scale out instead, the way
>> Exchange 2013 and Dovecot Pro apparently do. The idea, as I understand it,
>> is to have multiple backends that all provide access to the same mailboxes.
>> It should be possible to add and remove backends completely transparently.
>> Dovecot Pro apparently achieves that by storing all mail in local caches
>> backed by shared object storage (e.g. Ceph), in conjunction with Dovecot
>> Director.
>>
>> Now I'm trying to understand if anything like that is on the roadmap for
>> Cyrus. I see that Cyrus 3.0 (experimentally) supports object storage, but
>> only for archive partitions. Are there plans for Cyrus 3.1 or later to add
>> support for regular mail partitions as well?
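>>
>> For reference, my reading of the 3.0 docs is that the experimental object
>> storage support hangs off the archive settings, roughly like this (paths
>> are examples, and the exact options for the object storage backends the
>> release notes mention should be double-checked):
>>
>>     # imapd.conf, Cyrus 3.0 (experimental; example paths)
>>     archive_enabled: 1
>>     archive_days: 7
>>     partition-default: /var/spool/cyrus
>>     archivepartition-default: /var/spool/cyrus-archive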
>>
>> Personally I'm still happy with our setup, but I'm told that future storage
>> hardware won't easily support what we're doing anymore. I'm aware that both
>> clustering and replication are already possible with Cyrus, but my
>> understanding is that you can't trivially and automatically switch to a
>> replicated backend if one goes down. You also need to replicate all
>> messages to each new backend you introduce, which isn't quite what our
>> systems people would like to have.
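>>
>> By replication I mean the standard rolling sync_client setup, roughly the
>> following (hostnames and credentials are examples):
>>
>>     # imapd.conf on the master
>>     sync_log: 1
>>     sync_host: replica.example.org
>>     sync_authname: repluser
>>     sync_password: secret
>>
>>     # cyrus.conf on the master: run the rolling replication client
>>     START {
>>         syncclient    cmd="sync_client -r"
>>     }
>>
>> Each replica holds a complete copy of the spool, so every backend we add
>> needs its own full copy of the data rather than sharing it.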
>>
>> Thanks
>> Sebastian
>> -- 
>>      .:.Sebastian Hagedorn - Weyertal 121 (Gebäude 133), Zimmer 2.02.:.
>>                   .:.Regionales Rechenzentrum (RRZK).:.
>>     .:.Universität zu Köln / Cologne University - ✆ +49-221-470-89578.:.
>

-- 
Alvin Starr                   ||   voice: (905)513-7688
Netvel Inc.                   ||   Cell:  (416)806-0133
alvin at netvel.net              ||
