Cyrus with NFS storage: random DBERROR

Michael Menge michael.menge at zdv.uni-tuebingen.de
Tue Jun 5 02:25:50 EDT 2007


Hi,

After the problem with the wiki was solved, I added a summary about
CyrusCluster at http://cyrusimap.web.cmu.edu/twiki/bin/view/Cyrus/CyrusCluster .
Please feel free to add information about your experiences with NFS.

Quoting Paul Dekkers <Paul.Dekkers at surfnet.nl>:

> Hi,
>
> It took me a while before I found the time to try the meta-partitions and
> NFS-backed (data-)partitions, but:
>
> Dmitriy Kirhlarov wrote:
>> On Thu, May 03, 2007 at 05:08:52PM +0200, Paul Dekkers wrote:
>>
>>> I recently tried to use NFS (on a RedHat client, both to a NetApp filer
>>> as well as a RedHat NFS server) and I'll share my experiences:
>>>
>>> Michael Menge wrote:
>>>
>>>> Cyrus has two problems with NFS.
>>>>
>>>> 1. Cyrus depends on filesystem locking. NFSv4 should have solved this
>>>> problem, but I have not tested it.
>>>>
>>>> 2. BerkeleyDB uses shared memory, which does not work across multiple
>>>> servers.
>>>>
>>> I used skiplist in the tests (default with Simon's RPM), and initially
>>> just used NFSv3 (and I also tested NFSv4): as long as I mounted with the
>>> -o nolock option it actually worked quite well (also on NFSv3). The
>>> performance was even better with the NetApp as target than with a local
>>> filesystem (and NFSv3 was faster than v4).
>>>
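
For reference, if you want to be sure none of the Berkeley DB backends are
in use, the database backends can be selected in imapd.conf. A minimal
sketch (option names as in Cyrus 2.3; check imapd.conf(5) for your version):

    # keep all databases on skiplist so no shared-memory Berkeley DB
    # environment is needed
    mboxlist_db: skiplist
    annotation_db: skiplist
    seenstate_db: skiplist
    duplicate_db: skiplist
    tlscache_db: skiplist
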
>>> The nolock option does not disable locking for the filesystem (as I
>>> understand it); it just disables locking over NFS, so other nodes won't
>>> see the same file locked. (Correct me if I'm wrong.) My intention was
>>> not to have an active-active setup, so in that regard this might not be
>>> that bad. Not sure what other catches there are, though.
>>>
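
As a concrete example, the "-o nolock" setup is just a normal NFS mount
with client-local locking; something like the following (server, export
and mount point are placeholders):

    # locks are kept on the client only and are not sent to the NFS server
    mount -t nfs -o nolock,hard,intr,tcp filer:/vol/imap /var/spool/imap
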
>>
>> Have you tried the metapartition* options? If you don't need an
>> active-active setup, they can be useful.
>>
>
> I didn't try metapartitions with my "-o nolock" experiment (which of
> course doesn't work with active-active either), but now I did another
> experiment with regular NFS locking (no special mount-options) and a
> metapartition for every type of metadata (metapartition_files: header
> index cache expunge squat).
>
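
For those who want to try the same split, a minimal imapd.conf sketch with
the data partition on NFS and the metadata on local disk could look like
this (the paths are just examples):

    # mailbox data (message files) on the NFS mount
    partition-default: /var/spool/imap
    # metadata on a local filesystem
    metapartition-default: /var/spool/imap-meta
    # which files are treated as metadata
    metapartition_files: header index cache expunge squat
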
> I'm glad to say that this seems to work quite well! Similar to the "-o
> nolock" experiment, actually, but it sounds more solid without the "tweaking".
> We use a NetApp as NFS filer, and it actually seems to perform a bit
> better than our (this) internal RAID5, load is similar, ... and
> fortunately no errors from imaptest.
>
> It sounds like this could work. But I don't know the Cyrus internals
> well enough to say whether there are any catches; Ken (or someone else),
> could this be considered safe? If it is safe, I'd prefer to use NFS
> because performance is similar (or better) and the filers are more
> reliable than our RAID5 setup. (I won't go into details, but it's
> basically a physically separated RAID-1 set of drives in RAID-6-ish.)
>
> (Performance-wise I only tried small folders, but as soon as the
> metadata is cached, I think there are not a lot of directory reads when
> a folder is opened, so that doesn't really matter... right?)
>
>>> I stressed the setup with the imaptest tool from Dovecot. I saw problems
>>> with that in the past (also with NFSv3 and v4, but in combination with
>>> Cyrus 2.2, and I'm not sure if I tried nolock); now it seemed to do just
>>> fine. NFSv4 by itself does not seem to be the answer, though; -o nolock
>>> (with Linux as the client) seems to be.
>>>
>>> I'm very hesitant to put this into production; I just wanted to do some
>>> more tests first and then ask others whether they think this is wise or
>>> not... I couldn't find the time to do more tests... (like seeing how
>>> RedHat 5 behaves instead of RedHat 4, whether the trick also works on
>>> FreeBSD, whether I can make it fail one way or another... suggestions
>>> always welcome...)
>>>
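
For anyone who wants to repeat the stress test, a typical imaptest run
looks roughly like this (host, credentials and the source mbox file are
placeholders; see the imaptest documentation for the full parameter list):

    # 10 concurrent clients hammering the server for 5 minutes,
    # appending messages taken from the given mbox file
    imaptest host=imap.example.org user=testuser pass=secret \
             mbox=/tmp/testmbox clients=10 secs=300
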
>
> I still have to test how RedHat 5 and FreeBSD behave.
>
> Paul
>



--------------------------------------------------------------------------------
M.Menge                                 Tel.: (49) 7071/29-70316
Universitaet Tuebingen                  Fax.: (49) 7071/29-5912
Zentrum fuer Datenverarbeitung          mail: michael.menge at zdv.uni-tuebingen.de
Waechterstrasse 76
72074 Tuebingen