Adding archiving to an existing Cyrus installation

Nic Bernstein nic at onlight.com
Sat Nov 4 11:58:46 EDT 2017


Thanks much to you both for your comments and suggestions.  We had 
already considered creating a temporary "staging" partition and 
shuffling mailboxes around, as Michael discussed, but we share the same 
reservations about it.  Since we're dealing with nearly 6TB of data, 
most of it old, that scheme would introduce considerable disruption to a 
very active mail system.  We have a hard time getting a two-hour 
maintenance window, and this would take days!

Bron, other Fastmailers, any thoughts??
     -nic

On 11/03/2017 11:20 AM, Michael Menge wrote:
> Hi,
>
> Quoting Reinaldo Gil Lima de Carvalho <reinaldoc at gmail.com>:
>
>> I think that singleinstancestore (message hard links) will not survive
>> moving from one partition to the other, and total storage size will
>> increase significantly.
>>
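>> A quick way to gauge how much space such a move would cost is to count
>> the hard-linked files on the current spool, e.g. (the spool path here is
>> just an example):
>>
>>     find /var/spool/cyrus/mail -type f -links +1 | wc -l
>>
>> Every file that shows up there is one that singleinstancestore has
>> de-duplicated today and that would become an independent full copy after
>> a cross-partition move.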
>
> Thanks for the hint. That was not a problem during the migration to the 
> meta-data partition, since the mails stayed on the same partition (in the 
> file-system sense, not the Cyrus-partition sense) and only the hard links 
> were changed.
>
> So that's one more reason to look for a different migration path.
>
>
>>
>> 2017-11-03 12:22 GMT-03:00 Michael Menge 
>> <michael.menge at zdv.uni-tuebingen.de>:
>>
>>> Hi Nic,
>>>
>>> Quoting Nic Bernstein <nic at onlight.com>:
>>>
>>>> Friends,
>>>> I have a client with Cyrus 2.5.10 installed.  Last year we migrated
>>>> their old 2.3.18 system to 2.5.10, with an eye towards an eventual move
>>>> to 3.0.x.  Based on Bron's most excellent email of last year ([Subject:
>>>> Cyrus database and file usage data], Cyrus Devel, 8 January 2016), we
>>>> used a tiered layout for the storage:
>>>>
>>>> The main categories are:
>>>>
>>>>  * Config directory (ssd) [/var/lib/imap]
>>>>      o sieve
>>>>      o seen
>>>>      o sub
>>>>      o quota
>>>>      o mailboxes.db
>>>>      o annotations.db
>>>>  * Ephemeral [/var/run/cyrus -- in tmpfs]
>>>>      o tls_sessions.db
>>>>      o deliver.db
>>>>      o statuscache.db
>>>>      o proc (directory)
>>>>      o lock (directory)
>>>>  * Mailbox data [typical 2.5.X usage]
>>>>      o Meta-data (ssd)
>>>>          + header
>>>>          + index
>>>>          + cache
>>>>          + expunge
>>>>          + squat (search index)
>>>>          + annotations
>>>>      o Spool data (disk: raidX)
>>>>          + messages (rfc822 blobs)
>>>>
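>>>> In imapd.conf terms, that layout boils down to something like the
>>>> following -- the partition names and paths here are illustrative, not
>>>> our real ones:
>>>>
>>>>     configdirectory: /var/lib/imap
>>>>     # keep the per-mailbox metadata files on the SSD pool
>>>>     metapartition_files: header index cache expunge squat annotations
>>>>     metapartition-default: /ssdpool/meta/default
>>>>     # message blobs (rfc822 files) stay on spinning media
>>>>     partition-default: /slowpool/spool/default
>>>>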
>>>> We sized the Fast SSD pool (this is three-drive mirrors on ZFS) to be
>>>> extra large, so it could eventually handle "Hot" data, and left about
>>>> 300GB free there.  Data, on spinning media, is currently 5.74TB with
>>>> 4.8TB free (RAID10).  Metadata is 35GB and /var/lib/imap is 8GB, all of
>>>> which is in the Fast pool.
>>>>
>>>> Now the client is ready to take the dive into v3.0, and I'm trying to
>>>> figure out how to put the "archive" operation into effect.
>>>>
>>>> I have read the documentation (hell, I wrote most of it) and understand
>>>> the settings, but what I cannot quite wrap my brain around is this:
>>>> there is already all of this data sitting in all of these data
>>>> partitions (we use a total of 34 separate partitions each for data &
>>>> metadata), so how do I make the transition to separate archive
>>>> partitions, given that all that data is on the "slow" drives?  Can I
>>>> just reassign all of the current data partitions as archive partitions,
>>>> define the new set of "Hot" data partitions on the Fast pool, and let
>>>> 'er rip, or what?
>>>>
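>>>> For concreteness, what I imagine the 3.0 configuration would look like
>>>> is roughly this -- the names and paths are made up, and I may be
>>>> misreading how the partition/archivepartition pairs are meant to work:
>>>>
>>>>     archive_enabled: 1
>>>>     archive_days: 7
>>>>     # new "hot" spool carved out of the Fast (SSD) pool
>>>>     partition-default: /fastpool/spool/default
>>>>     # existing spool on spinning media, reassigned as the archive tier
>>>>     archivepartition-default: /slowpool/spool/default
>>>>
>>>> with cyr_expire then doing the actual demotion of messages older than
>>>> archive_days down to the archive partition.
>>>>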
>>>> I promise, if you tell me, I'll write it up as real documentation. :-)
>>>>
>>>>
>>> We are interested in such a migration too. Our fallback plan, if we
>>> don't find a better way to do it, is to use the same method we used when
>>> we introduced the SSD meta-data partition.
>>>
>>>
>>> 1. We created a new partition in our Cyrus configuration,
>>> 2. we moved the accounts from the old partition to the new one, one by
>>>    one,
>>> 3. (this step would be new for the archive partition) run cyr_expire to
>>>    move the old mails back to the slow disks (sketched in commands
>>>    below).
>>>
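>>> In commands, that plan would look roughly like this -- the mailbox and
>>> partition names are placeholders, and the cyr_expire archive flag is my
>>> reading of the 3.0 man page, so please double-check it:
>>>
>>>     # step 2: move an account to the new (fast) partition; renaming a
>>>     # mailbox to the same name with a partition argument moves it
>>>     cyradm> rename user/example user/example fastdata
>>>
>>>     # step 3: push messages older than the cutoff to the archive
>>>     # partition (cyr_expire performs the archiving in 3.0)
>>>     cyr_expire -A 7d
>>>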
>>> This method has two downsides:
>>> 1. we have to copy all mails to the fast storage and then move the old
>>>    mails back to the slow storage, so we end up moving most of the mails
>>>    twice;
>>> 2. the paths of the old mails will change, so they will be stored again
>>>    in our file-based backup.
>>>
>>> So a method without these downsides would be appreciated.
>>>
>>> Regards
>>>
>>>    Michael
>>>
>
>
>
> -------------------------------------------------------------------------------- 
>
> M.Menge                                Tel.: (49) 7071/29-70316
> Universität Tübingen                   Fax.: (49) 7071/29-5912
> Zentrum für Datenverarbeitung          mail: 
> michael.menge at zdv.uni-tuebingen.de
> Wächterstraße 76
> 72074 Tübingen
>
> ----
> Cyrus Home Page: http://www.cyrusimap.org/
> List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
> To Unsubscribe:
> https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus

-- 
Nic Bernstein                             nic at onlight.com
Onlight Inc.                              www.onlight.com
6525 W Bluemound Rd., Ste 24	          v. 414.272.4477
Milwaukee, Wisconsin  53213-4073	  f. 414.290.0335


