Help on reconstruct - Cyrus 2.3.11

Ismaël Tanguy ismael.tanguy at univ-brest.fr
Thu Dec 6 09:50:49 EST 2018


Eric, Egoitz

Thank you very much, all your advice is very interesting.

Egoitz, replication and delayed expunge are planned for the next 
migration.
I'm not sure that Cyrus 2.3 implements delayed expunge.

Eric, here is a piece of fstab for mounting nfs:

192.168.xxx.xxx:/mailperso-fghijkl/perso   /var/spool/imap-fghijkl   nfs rsize=262144,wsize=262144,hard,intr,proto=tcp
192.168.xxx.xxx:/mailperso-abcde/perso     /var/spool/imap-abcde     nfs rsize=262144,wsize=262144,hard,intr,proto=tcp
192.168.xxx.xxx:/mailperso-mnopq/perso     /var/spool/imap-mnopq     nfs rsize=262144,wsize=262144,hard,intr,proto=tcp
192.168.xxx.xxx:/mailperso-rstuvwxyz/perso /var/spool/imap-rstuvwxyz nfs rsize=262144,wsize=262144,hard,intr,proto=tcp

Is yours much the same?
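For what it's worth, here is a small sketch that checks an fstab entry like the ones above for the options we rely on. The option names are standard Linux NFS mount options; which combination actually avoids the lockd timeouts is an open question, not something this script can decide:

```python
# Sketch: flag fstab NFS entries that are missing mount options we expect.
# "hard" and "proto=tcp" are the options from the entries above; adjust the
# required list to whatever your own setup depends on.

def parse_fstab_line(line):
    """Split one fstab entry into (device, mountpoint, fstype, options-set)."""
    fields = line.split()
    if len(fields) < 4:
        raise ValueError("incomplete fstab entry: %r" % line)
    return fields[0], fields[1], fields[2], set(fields[3].split(","))

def missing_options(line, required=("hard", "proto=tcp")):
    """Return the required mount options absent from an nfs entry."""
    _, _, fstype, opts = parse_fstab_line(line)
    if fstype != "nfs":
        return []
    return [opt for opt in required if opt not in opts]
```

Run against each spool line; a non-empty result flags a mount that differs from the others.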

Why don't you want to move to Cyrus 2.4? Is it too old?
Keeping the metadata local is really a good idea, but is it possible in Cyrus 2.3?

I have to verify.
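If I read the newer documentation correctly, recent Cyrus (2.5/3.0) can split metadata onto a separate partition via imapd.conf; I don't think 2.3 has these options, which is what I need to verify. A sketch of what it would look like (paths are placeholders):

```
# imapd.conf sketch (Cyrus 2.5/3.0 options, apparently not in 2.3):
partition-default: /var/spool/imap        # message files, NFS-backed
metapartition-default: /var/imap-meta     # header/index/cache, local SSD
metapartition_files: header index cache
```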
Thanks again
Ismaël


On 06/12/2018 at 11:55, Eric Luyten wrote:
>
>
> On 06/12/2018 11:17, Ismaël Tanguy wrote:
>>
>> Hello,
>>
>> thanks for your answer.
>> We have been using Cyrus with NFS for more than 10 years because of 
>> the snapshots.
>> Snapshots give us a way to restore a mail or a mailbox.
>> It worked like a charm until the migration.
>> Now we're stuck with daily mailbox corruption due to this storage.
>>
>
>
> Ismaël,
>
>
>
> In the course of the past years there have been quite a few remarks 
> made on this list regarding successful use of NFS as a Cyrus mail spool.
>
> You stated yourself that the problems started when switching from one 
> type of NFS server to another.
> You may want to have a close(r) look at the way you mount the NFS 
> share(s) on the Cyrus server(s).
>
>
> I am looking at moving to NFS storage for (approx. 10 TB) Cyrus spool 
> in the course of 2019 but intend to keep the metadata 
> (header/index/cache) on an SSD pool local to the (virtual) Cyrus 
> server. Haven't decided yet whether we'll move from 2.3 to 2.5 or 3.0, 
> certainly not 2.4
>
>
>
> Eric Luyten.
>
>
>
>
>> We're looking, first, for a way to safely automate the reconstruct of 
>> a mailbox, ideally keeping the Seen state of the mails.
>> In parallel, we're studying a migration to Cyrus 2.4.17 without 
>> the use of NFS.
>>
>> Cheers,
>> Ismael
>>
>>
>> On 06/12/2018 at 08:21, egoitz at sarenet.es wrote:
>>> Hi!
>>>
>>> Mate, NFS is not such an appropriate storage for Cyrus. I'd recommend 
>>> using machine-local storage. Using that kind of config won't succeed.
>>>
>>> Cheers,
>>>
>>> Egoitz,
>>>
>>> On 5 Dec 2018, at 12:13, Ismaël Tanguy 
>>> <ismael.tanguy at univ-brest.fr <mailto:ismael.tanguy at univ-brest.fr>> 
>>> wrote:
>>>
>>>> Hello, this is Cyrus 2.3.11 on CentOS 5.
>>>> About 5000 users for 10 TB.
>>>>
>>>> Mail storage has been moved from NetApp NFS to FluidFS (aka Dell 
>>>> Compellent NFS).
>>>> Since an update on FluidFS, the IMAP spool undergoes daily NFS timeouts 
>>>> which lead to corrupt mailboxes.
>>>> Typically, this begins with lines like this in /var/log/messages:
>>>>
>>>> Dec  5 09:54:43 mailhost kernel: lockd: server 192.xxx.xx.xx not responding, timed out
>>>>
>>>> This is followed by IOERRORs for the mailboxes accessed during the NFS timeout:
>>>>
>>>> Dec  5 09:54:47 mailhost lmtpunix[14542]: IOERROR: locking index for user.xxxx: Input/output error
>>>> Dec  5 09:54:47 mailhost imaps[21999]: IOERROR: locking header for user.xxxx.Sent: Input/output error
>>>> Dec  5 09:54:47 mailhost imaps[26935]: IOERROR: locking index for user.xxxx: Input/output error
>>>> Dec  5 09:54:47 mailhost imaps[24013]: IOERROR: locking index for user.xxxx: Input/output error
>>>> Dec  5 09:54:47 mailhost imaps[15672]: IOERROR: locking index for user.xxxx: Input/output error
>>>> Dec  5 09:54:47 mailhost imaps[3999]: IOERROR: locking index for user.xxxx: Input/output error
>>>> Dec  5 09:54:47 mailhost imaps[30671]: IOERROR: locking index for user.xxxx: Input/output error
>>>>
>>>> ...................
>>>> Around 15 mailboxes are corrupted at each timeout.
>>>> Manually, we can repair these mailboxes:
>>>>
>>>>   * first, we have to delete all cyrus.* files in the mailbox, 
>>>>     otherwise the following reconstruct can be blocked
>>>>   * then, we reconstruct the mailbox (reconstruct -s
>>>>     user.<NAME>.<FOLDER>)
>>>>
>>>> The downside of this method is that all messages in the 
>>>> reconstructed folder are marked 'Not seen'.
>>>> To automate this, a Python script has been written, but sometimes 
>>>> not all cyrus files (e.g. cyrus.index) are recreated:
>>>>
>>>> Dec  5 01:03:53 mailhost lmtpunix[497]: IOERROR: opening /var/spool/imap/x/user/xxxxxx/cyrus.index: No such file or directory
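The repair loop we are trying to automate can be sketched as below. The reconstruct path and the su invocation are assumptions to adapt to the actual installation; the cyrus.* cleanup and the `reconstruct -s` call follow the manual steps above, with an explicit check that cyrus.index came back:

```python
# Sketch of the manual repair: delete cyrus.* metadata, reconstruct, then
# verify that cyrus.index was recreated. CYRUS_RECONSTRUCT and the "su cyrus"
# wrapper are placeholders for this installation, not fixed Cyrus paths.
import glob
import os
import subprocess

CYRUS_RECONSTRUCT = "/usr/lib/cyrus-imapd/reconstruct"  # assumed path

def cyrus_files(mailbox_dir):
    """The cyrus.* metadata files to remove before reconstruct;
    leaving a corrupt one in place can block the reconstruct."""
    return sorted(glob.glob(os.path.join(mailbox_dir, "cyrus.*")))

def repair(mailbox, mailbox_dir, run=subprocess.check_call):
    """Delete metadata, run reconstruct -s, verify cyrus.index is back."""
    for path in cyrus_files(mailbox_dir):
        os.unlink(path)
    run(["su", "cyrus", "-c", "%s -s %s" % (CYRUS_RECONSTRUCT, mailbox)])
    index = os.path.join(mailbox_dir, "cyrus.index")
    if not os.path.exists(index):
        raise RuntimeError("reconstruct did not recreate %s" % index)
```

The final check is the point: when reconstruct fails to recreate cyrus.index, the script can retry or alert instead of leaving deliver to hit "No such file or directory".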
>>>>
>>>> Timeouts happen about 3 times per day, and the Cyrus deliver process 
>>>> is blocked when delivering to a corrupted mailbox.
>>>> So my first question is: how can we reconstruct a mailbox without 
>>>> marking mails as not seen?
>>>> And my second question is: why are the cyrus files not recreated 
>>>> every time? Is this due to the -s parameter of reconstruct?
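One idea we are considering for the Seen problem, which is an assumption to validate rather than a documented Cyrus recipe: snapshot each message's flags keyed by Message-ID over IMAP before the reconstruct, then restore \Seen afterwards, since reconstruct may renumber UIDs. The restorable part of such a script:

```python
# Sketch: key flags on Message-ID instead of UID, because reconstruct may
# renumber UIDs. The snapshots themselves would be fetched over IMAP with an
# admin login (hypothetical setup, not shown).

def uids_to_mark_seen(snapshot, after):
    """snapshot: {message_id: set_of_flags} fetched before reconstruct.
    after: {message_id: uid} fetched after reconstruct.
    Returns the sorted UIDs that should get \\Seen restored."""
    return sorted(uid for msgid, uid in after.items()
                  if "\\Seen" in snapshot.get(msgid, set()))

def restore_seen(imap, mailbox, snapshot, after):
    """Apply the restore over an already-authenticated imaplib connection."""
    imap.select(mailbox)
    for uid in uids_to_mark_seen(snapshot, after):
        # UID STORE keeps us independent of message sequence numbers.
        imap.uid("STORE", str(uid), "+FLAGS", r"(\Seen)")
```

Messages without a Message-ID, or with duplicate ones, would need special handling; this only covers the common case.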
>>>>
>>>> Any help will be appreciated.
>>>>
>>>> Thanks
>>>>
>>>> ------------------
>>>>
>>>> Ismael TANGUY
>>>>
>>>> -- 
>>>> <http://www.univ-brest.fr>
>>>>
>>>>
>>>> ----
>>>> Cyrus Home Page: http://www.cyrusimap.org/
>>>> List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
>>>> To Unsubscribe:
>>>> https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus
>>
>>
>
>

