Adding archiving to an existing Cyrus installation
Bron Gondwana
brong at fastmailteam.com
Thu Nov 9 15:48:35 EST 2017
On Fri, 10 Nov 2017, at 00:02, Nic Bernstein wrote:
> Bron,
> Thanks for the response. Your solution has pointed us towards the
> proper approach. We are using ZFS, so would be performing ZFS
> send/receive replication rather than "mv", to move the filesystems.
> Our typical approach for this sort of thing, then, would be:
>
> * Create a filesystem snapshot
> * zfs snapshot $ropt ${SOURCE_FS}@$newsnap
> * Perform ZFS send/receive, something like this:
> * zfs send -p $SENDVERBOSE $cloneorigin@$sourcestartsnap | zfs recv
> -u $RECVVERBOSE -F "$destfs"
> * Then, once we've completed a replication, we need to quiesce the
> filesystem and do the same again, to catch up to the current state
That's basically exactly how FastMail's user moves work, and the sync-
protocol based XFER that Ken built based on it. Except inside Cyrus
rather than at the FS level.
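
For anyone following along in the archives, a rough sketch of that
sequence (pool, dataset and snapshot names are placeholders, not Nic's
actual layout):

    # initial full replication while the mail store keeps running
    zfs snapshot tank/spool@migrate-base
    zfs send -p tank/spool@migrate-base | zfs recv -u -F tank/spool-new

    # later, with writes quiesced (or Cyrus briefly stopped), send only
    # the delta since the base snapshot to catch up
    zfs snapshot tank/spool@migrate-final
    zfs send -p -i @migrate-base tank/spool@migrate-final | zfs recv -u -F tank/spool-new
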
> * Finally, replace the existing filesystem with the new replica, and
> discard the original
>
> To quiesce the filesystem, we would normally like to tell whatever
> applications are using it to temporarily freeze operations, so the
> underlying filesystem is in a consistent state. When I was in
> Melbourne, back in April, we (you, Ellie & I) discussed what would be
> needed to introduce such a feature to Cyrus. I'm curious if there's
> been any further discussion or work on this? Should I open a feature
> request?
There has been discussion, not yet work.
https://github.com/cyrusimap/cyrus-imapd/issues/1763
> It would be nice to be able to complete consistent snapshots for
> filesystem operations like replication or backup, and this is a
> feature of many large applications, such as mail and DB servers.
Yep :) It's on our radar. There's a whole lot of work planned - Ken is
going to write up the notes on that at some point. We had a planning
meeting while he was in Australia.
> Thanks again for shining a welcome light on how to achieve the goal
> of adding archiving to an existing system.
>
> Cheers,
> -nic
>
>
> On 11/04/2017 10:59 PM, Bron Gondwana wrote:
>> Hi Nic,
>>
>> Sorry I didn't get back to answering you on this the other day!
>>
>> So... this one is kinda tricky, because everything is going to be on
>> "spool", but here's how I would do it.>>
>> Before:
>>
>> /mnt/smalldisk/conf -> meta files only
>> /mnt/bigdisk/spool -> all email right now
>>
>> Stage 1: splitting:
>>
>> /mnt/smalldisk/conf
>> /mnt/bigdisk/spool
>> /mnt/bigdisk/spool-archive -> empty
>>
>> And set archivepartition-default to /mnt/bigdisk/spool-archive
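
(For the archives: a minimal imapd.conf sketch of that stage-1 split,
assuming the default partition name; paths as above, and archiving
switched on explicitly:)

    partition-default: /mnt/bigdisk/spool
    archivepartition-default: /mnt/bigdisk/spool-archive
    archive_enabled: 1
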
>>
>> Now you need to run an initial cyr_expire. This will take a long
>> time, but it should be able to use hardlinks to move the files - it's
>> using cyrus_copyfile.
>>
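
(Which messages cyr_expire archives is driven by the archive_* settings
in imapd.conf; the values below are illustrative, and the -A duration
flag is my assumption from the 3.0 cyr_expire man page, so check it
against your build:)

    # imapd.conf: archive policy
    archive_days: 7        # messages older than this move to the archive partition
    archive_maxsize: 1024  # KiB; larger messages are archived regardless of age

    # then, as the cyrus user, run the initial pass, e.g.
    cyr_expire -A 7d -v
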
>> Once cyr_expire has finished and most of your email is moved into
>> spool-archive, shut down cyrus.
>>
>> mv /mnt/bigdisk/spool /mnt/smalldisk/spool
>>
>> And set partition-default to /mnt/smalldisk/spool
>>
>> That way your downtime is only while the small remaining spool gets
>> moved to the other disk.
>>
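
(So the window itself is just something like the following; the service
commands are placeholders for whatever init system is in use:)

    # stop Cyrus, relocate the now-small hot spool, repoint the partition
    systemctl stop cyrus-imapd        # placeholder; use your init system
    mv /mnt/bigdisk/spool /mnt/smalldisk/spool
    # in imapd.conf:
    #   partition-default: /mnt/smalldisk/spool
    #   archivepartition-default: /mnt/bigdisk/spool-archive   (unchanged)
    systemctl start cyrus-imapd
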
>> Bron.
>>
>>
>> On Sun, 5 Nov 2017, at 02:58, Nic Bernstein wrote:
>>> Thanks much to you both for your comments and suggestions. We had
>>> already considered creating a temporary "staging" partition and
>>> shuffling mailboxes around, as Michael discussed, but have the same
>>> reservations about it. Since we're dealing with nearly 6TB of data,
>>> most of it old, this scheme would introduce considerable disruption
>>> to a very active mail system. We have a hard time getting a two hour
>>> maintenance window, and this would take days!
>>>
>>> Bron, other Fastmailers, any thoughts??
>>> -nic
>>>
>>> On 11/03/2017 11:20 AM, Michael Menge wrote:
>>>> Hi,
>>>>
>>>> Quoting Reinaldo Gil Lima de Carvalho <reinaldoc at gmail.com>:
>>>>
>>>>> I think that singleinstancestore (message hard links) will not
>>>>> survive when moving from one partition to the other, and total
>>>>> storage size will increase significantly.
>>>>>
>>>>
>>>> Thanks for the hint. This was not a problem when migrating to the
>>>> meta-data partition, as the mails stayed on the same partition (in
>>>> the file-system sense, not the cyrus-partition sense) and only
>>>> hardlinks were changed.
>>>>
>>>> So, one more reason for another migration path.
>>>>
>>>>
>>>>>
>>>>> 2017-11-03 12:22 GMT-03:00 Michael Menge
>>>>> <michael.menge at zdv.uni-tuebingen.de>:
>>>>>
>>>>>> Hi Nic,
>>>>>>
>>>>>> Quoting Nic Bernstein <nic at onlight.com>:
>>>>>>
>>>>>>> Friends,
>>>>>>> I have a client with Cyrus 2.5.10 installed. Last year we migrated
>>>>>>> their old 2.3.18 system to 2.5.10, with an eye towards an eventual
>>>>>>> move to 3.0.x. Based on Bron's most excellent email of last year
>>>>>>> ([Subject: Cyrus database and file usage data] from Cyrus Devel of
>>>>>>> 8 January 2016) we used a tiered layout for the storage:
>>>>>>>
>>>>>>> The main categories are:
>>>>>>>
>>>>>>> * Config directory (ssd) [/var/lib/imap]
>>>>>>> o sieve
>>>>>>> o seen
>>>>>>> o sub
>>>>>>> o quota
>>>>>>> o mailboxes.db
>>>>>>> o annotations.db
>>>>>>> * Ephemeral [/var/run/cyrus -- in tmpfs]
>>>>>>> o tls_sessions.db
>>>>>>> o deliver.db
>>>>>>> o statuscache.db
>>>>>>> o proc (directory)
>>>>>>> o lock (directory)
>>>>>>> * Mailbox data [typical 2.5.X usage]
>>>>>>> o Meta-data (ssd)
>>>>>>> + header
>>>>>>> + index
>>>>>>> + cache
>>>>>>> + expunge
>>>>>>> + squat (search index)
>>>>>>> + annotations
>>>>>>> o Spool data (disk: raidX)
>>>>>>> + messages (rfc822 blobs)
>>>>>>>
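
(For readers reproducing that layout: it maps onto imapd.conf roughly
like this; paths are illustrative only:)

    partition-default: /mnt/bigdisk/spool
    metapartition-default: /mnt/smalldisk/meta
    metapartition_files: header index cache expunge squat annotations
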
>>>>>>> We sized the Fast SSD pool (this is three-drive mirrors on ZFS) to
>>>>>>> be extra large, so it could eventually handle "Hot" data, and left
>>>>>>> about 300GB free there. Data, on spinning media, is currently
>>>>>>> 5.74TB with 4.8TB free (RAID10). Metadata is 35GB and /var/lib/imap
>>>>>>> is 8GB, all of which is in the Fast pool.
>>>>>>>
>>>>>>> Now the client is ready to take the dive into v3.0, and I'm trying
>>>>>>> to figure out how to put "archive" operation in effect.
>>>>>>>
>>>>>>> I have read the documentation (hell, I wrote most of it) and
>>>>>>> understand the settings, but what I cannot quite wrap my brain
>>>>>>> around is this: there is already all of this data sitting in all
>>>>>>> of these data partitions (we use a total of 34 separate partitions
>>>>>>> each for data & metadata), so how do I make the transition to
>>>>>>> separate archive partitions, since all that data is on the "slow"
>>>>>>> drives? Can I just reassign all of the current data partitions to
>>>>>>> archive partitions, define the new set of "Hot" data partitions on
>>>>>>> the Fast pool, and let 'er rip, or what?
>>>>>>>
>>>>>>> I promise, if you tell me, I'll write it up as real
>>>>>>> documentation. :-)
>>>>>>>
>>>>>>>
>>>>>> We are interested in such a migration too. Our fallback plan, if
>>>>>> we don't find a better way to do it, is to use the same method as
>>>>>> when we introduced the ssd meta-data partition.
>>>>>>
>>>>>>
>>>>>> 1. We created a new partition in our cyrus configuration,
>>>>>> 2. we moved the accounts from one partition to the other one by one,
>>>>>> 3. (this will be new for the archive partition) run cyr_expire to
>>>>>> move the old mails back to the slow disks.
>>>>>>
>>>>>> This method has two downsides:
>>>>>> 1. we have to copy all mails to the fast storage and then move the
>>>>>> old mails back to the slow storage, so we have to move most of the
>>>>>> mails twice;
>>>>>> 2. the path of the old mails will change, so they will be stored
>>>>>> again in our file-based backup.
>>>>>>
>>>>>> So a method without these downsides would be appreciated.
>>>>>>
>>>>>> Regards
>>>>>>
>>>>>> Michael
>>>>>>
>>>>>> --------------------------------------------------------------------------------
>>>>>> M.Menge Tel.: (49) 7071/29-70316
>>>>>> Universität Tübingen Fax.: (49) 7071/29-5912
>>>>>> Zentrum für Datenverarbeitung mail:
>>>>>> michael.menge at zdv.uni-tuebingen.de
>>>>>> Wächterstraße 76
>>>>>> 72074 Tübingen
>>>>>>
>>>>
>>>>
>>>>
>>>> --------------------------------------------------------------------------------
>>>> M.Menge Tel.: (49) 7071/29-70316
>>>> Universität Tübingen Fax.: (49) 7071/29-5912
>>>> Zentrum für Datenverarbeitung mail:
>>>> michael.menge at zdv.uni-tuebingen.de
>>>> Wächterstraße 76
>>>> 72074 Tübingen
>>>>
>>>
>>> --
>>> Nic Bernstein nic at onlight.com
>>> Onlight Inc. www.onlight.com
>>> 6525 W Bluemound Rd., Ste 24 v. 414.272.4477
>>> Milwaukee, Wisconsin 53213-4073 f. 414.290.0335
>>>
>>
>> --
>> Bron Gondwana, CEO, FastMail Pty Ltd
>> brong at fastmailteam.com
>>
>>
> --
> Nic Bernstein                          nic at onlight.com
> Onlight Inc.                           www.onlight.com
> 6525 W Bluemound Rd., Ste 24           v. 414.272.4477
> Milwaukee, Wisconsin 53213-4073        f. 414.290.0335
--
Bron Gondwana, CEO, FastMail Pty Ltd
brong at fastmailteam.com