unsubscribe
Sabine GOUDARD
sabine.goudard at st-etienne.archi.fr
Mon Jun 18 08:02:41 EDT 2018
----- Original Message -----
From: info-cyrus-request at lists.andrew.cmu.edu
To: info-cyrus at lists.andrew.cmu.edu
Sent: Monday, 18 June 2018 11:19:32
Subject: Info-cyrus Digest, Vol 155, Issue 25
Send Info-cyrus mailing list submissions to
info-cyrus at lists.andrew.cmu.edu
To subscribe or unsubscribe via the World Wide Web, visit
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus
or, via email, send a message with subject or body 'help' to
info-cyrus-request at lists.andrew.cmu.edu
You can reach the person managing the list at
info-cyrus-owner at lists.andrew.cmu.edu
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Info-cyrus digest..."
Today's Topics:
1. Re: Solaris (11) support (ellie timoney)
2. Re: XLIST, special-use mailboxes (ellie timoney)
3. Restart from....? (DRP) (Albert Shih)
4. Re: Restart from....? (DRP) (Niels Dettenbach)
5. Re: Restart from....? (DRP) (Albert Shih)
6. Re: Restart from....? (DRP) (Niels Dettenbach)
----------------------------------------------------------------------
Message: 1
Date: Mon, 18 Jun 2018 13:25:06 +1000
From: ellie timoney <ellie at fastmail.com>
To: info-cyrus at lists.andrew.cmu.edu
Subject: Re: Solaris (11) support
Message-ID:
<1529292306.2200259.1411257696.4A2DFA50 at webmail.messagingengine.com>
Content-Type: text/plain; charset="utf-8"
Hi Jean-Christophe,
On Fri, Jun 15, 2018, at 5:49 PM, Jean-Christophe Delaye wrote:
> So this is why the first part of my
> question was to know if there are many murder systems running
> on Solaris (11), and why I can't find specific notes about
> compiling/installing Cyrus imapd on this operating system.
The main contributors to Cyrus development are not running Solaris, either personally or organisationally, so Solaris support doesn't get a lot of direct attention.
There are a few people out there running Cyrus on Solaris (not sure if they're using murder or not). They usually pop up on the list with Solaris-compatibility issues/patches not long after new releases where we've accidentally broken something on Solaris, which we greatly appreciate! :)
I have no access to Solaris, and so have no additional insight to offer. But I'd be very happy to accept/merge patches to code/documentation from you if you get things working properly.
Cheers,
ellie
------------------------------
Message: 2
Date: Mon, 18 Jun 2018 14:03:47 +1000
From: ellie timoney <ellie at fastmail.com>
To: info-cyrus at lists.andrew.cmu.edu
Subject: Re: XLIST, special-use mailboxes
Message-ID:
<1529294627.2212334.1411265856.48429E9E at webmail.messagingengine.com>
Content-Type: text/plain; charset="utf-8"
Hi Paul,
On Fri, Jun 15, 2018, at 6:26 PM, Paul van der Vlis wrote:
> So xlist is removed in 2.5, and the new implementation is not there?
> So no support in Cyrus 2.5.10.
So, there are a few things being mixed up here.
* The RFC 6154 IMAP LIST Extension for Special-Use Mailboxes is supported in 2.5. My understanding is that this is the standardisation (into the LIST, CREATE, etc commands) of the functionality previously known as "XLIST".
* The XLIST command continues to work in Cyrus 2.5, but it's basically just a synonym for LIST. Speculatively, modern clients are probably using the RFC 6154 LIST extensions these days, but we continue to accept the XLIST spelling for backward compatibility with older, pre-6154 clients.
* The `specialusealways: 1` setting exists to make Cyrus always include special-use information in its LIST responses. The default is to only include it when it's requested. Enabling this setting is not strictly standard compliant, but might be helpful if your users have buggy clients that want special-use information but do not ask for it.
* The `xlist-attribute: mailbox` settings in imapd.conf no longer exist in Cyrus 2.5. This was used by the 2.4 XLIST implementation to provide special-use information. Since RFC 6154 support was added in 2.5, special-use attributes are first class attributes on mailboxes, and are no longer faked in this way.
* The `xlist-attribute: mailbox` settings have reappeared in Cyrus 3.0 as a way of providing information to the autocreate mechanism, such that autocreated mailboxes can have their special-use attribute applied during creation. The choice to re-use the "xlist-" name was mainly to aid migration from 2.4 systems to 3.0, but it does cause confusion! The 3.0 version of the "xlist-" names may be deprecated and renamed to something like "autocreate-specialuse-" in a future major release, but this has not yet happened.
Generally, to avoid confusion, I'd suggest referring to the special-use attributes and support for them as "special-use attributes", or "RFC 6154". The "XLIST" name referred to a specific pre-standardisation implementation of a similar idea that no longer exists.
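(For reference, a sketch of how the settings above might look in imapd.conf - the folder names are just examples, not defaults; check the imapd.conf(5) man page for your version:)

```
# Include special-use attributes in every LIST response, even when
# the client doesn't ask (not strictly standards-compliant, but can
# help buggy clients):
specialusealways: 1

# Cyrus 3.0: consumed by the autocreate mechanism so autocreated
# folders get their special-use attribute at creation time
# (example folder names):
xlist-drafts: Drafts
xlist-sent: Sent
xlist-trash: Trash
```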
Cheers,
ellie
------------------------------
Message: 3
Date: Mon, 18 Jun 2018 09:46:02 +0200
From: Albert Shih <Albert.Shih at obspm.fr>
To: info-cyrus at lists.andrew.cmu.edu
Subject: Restart from....? (DRP)
Message-ID: <20180618074602.GA1487 at io.chezmoi.fr>
Content-Type: text/plain; charset=iso-8859-1
Hi everyone
I have a question about DRP (Disaster Recovery Plan): what's the easiest (=
fastest) way to rebuild a server (with the data) after the server "disappears" (fire,
flood, etc.)?
I see three ways to "back up" the data:
Replication,
Backup service (inside cyrus-imapd 3),
Filesystem backup (whatever the technique)
For replication my concern is the speed of the replication: the main server
(I have only one server) has lots of RAM, SSDs, and SAS disks, while the
replica has SATA disks (lots of RAM too). When I check, I think
everything is indeed replicated on the "slave", but with some delay
(1/2 days).
What do you think ? What's your DRP ?
Regards.
JAS
--
Albert SHIH
Observatoire de Paris
xmpp: jas at obspm.fr
Heure local/Local time:
Mon Jun 18 09:37:59 CEST 2018
------------------------------
Message: 4
Date: Mon, 18 Jun 2018 10:22:03 +0200
From: Niels Dettenbach <nd at syndicat.com>
To: info-cyrus at lists.andrew.cmu.edu
Subject: Re: Restart from....? (DRP)
Message-ID: <3526300.x6mG1XRrD3 at gongo>
Content-Type: text/plain; charset="us-ascii"
On Monday, 18 June 2018 at 09:46:02 CEST, Albert Shih wrote:
> What do you think ? What's your DRP ?
I shoot snapshots of the underlying FS of the spool partition(s) and the
main DB files (skiplist), including (incremental) filesystem dumps of them.
In a disaster scenario it usually works well to reinstantiate the last
snapshot and start the server(s) with a forced full reconstruct run. But this
only offers "low resolution" recovery (mails/modifications since the last
snapshot are gone).
Besides this, we run daily FS backups (incl. Cyrus DB dumps), which allow us to
reinstall from zero (i.e. automated by ansible or similar) at the system and FS
level.
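A minimal sketch of such a cycle (the dataset, host, and snapshot names are made up, and the script only builds and prints the commands rather than executing them - check everything against your own layout before running it for real):

```shell
#!/bin/sh
# Sketch of a ZFS snapshot + incremental-send cycle for a Cyrus
# spool dataset. DATASET, REMOTE and the dates are placeholders;
# the commands are printed, not executed.
DATASET="tank/cyrus-spool"
REMOTE="backup-host"
PREV="2018-06-17"
NOW="2018-06-18"

# 1. Take a read-only snapshot of the spool dataset.
SNAP_CMD="zfs snapshot ${DATASET}@${NOW}"

# 2. Send only the delta since the previous snapshot to the backup
#    machine (incremental, so the transfer stays small).
SEND_CMD="zfs send -i ${DATASET}@${PREV} ${DATASET}@${NOW} | ssh ${REMOTE} zfs receive ${DATASET}"

echo "$SNAP_CMD"
echo "$SEND_CMD"
```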
I'm a bit new to the backup mechanisms and repo features included in Cyrus 3,
and interested in experiences with setups allowing an efficient "lossless"
recovery too.
best regards,
niels.
--
---
Niels Dettenbach
Syndicat IT & Internet
http://www.syndicat.com
PGP: https://syndicat.com/pub_key.asc
---
------------------------------
Message: 5
Date: Mon, 18 Jun 2018 10:48:16 +0200
From: Albert Shih <Albert.Shih at obspm.fr>
To: Niels Dettenbach via Info-cyrus <info-cyrus at lists.andrew.cmu.edu>
Subject: Re: Restart from....? (DRP)
Message-ID: <20180618084816.GA3030 at io.chezmoi.fr>
Content-Type: text/plain; charset=iso-8859-1
On 18/06/2018 at 10:22:03+0200, Niels Dettenbach via Info-cyrus wrote:
> Am Montag, 18. Juni 2018, 09:46:02 CEST schrieb Albert Shih:
> > What do you think ? What's your DRP ?
> I shoot snapshots from the underlying FS of the spool partition(s) and the
> main DB files (skiplist) - incl. (incremental) filesystem dumps of them.
How do you do that?
Because at the beginning my plan was to do both (replication and snapshots).
The problem is that currently I'm encountering a big issue with the snapshots.
I don't know if this is the right place, because I don't know if it's related to
Cyrus; that's why I didn't mention it at first. But I have a server (Dell
PowerEdge, 192 GB of RAM, 28 mechanical disks,
2 SSDs, 2 SAS disks (for the OS)).
The system is FreeBSD 11 running on the 2 SAS disks on UFS.
Cyrus imapd runs inside a jail on the 2 SSDs (on a ZFS pool).
The mailboxes and xapian index are on two ZFS datasets on a zpool with the 28
mechanical disks.
Everything seems to work fine, until I try to send the datasets to another
server. I just cannot send a ZFS snapshot from this server to another. If
the datasets are small that's OK, but with the mailboxes (~4 TB) the zfs
command just hangs after 10-40 minutes for 1-10 minutes, comes back and works
for 1 or 2 hours, then hangs again, etc.
> In a disaster scenario it usually works well to reinstantiate the last
> snapshot and start the server(s) with a forced full reconstruct run. But this
> only offers "low resolution" recovery (mails/modifications since the last
> snapshot are gone).
>
> Besides this, we run daily FS backups (incl. Cyrus DB dumps), which allow us to
How do you do that? Because Cyrus has a lot of DBs...
> reinstall from zero (i.e. automated by ansible or similar) at the system and FS
Yes, we use puppet; reinstalling the system and configuration is easy.
The hard part is the data.
> level.
>
> I'm a bit new to the backup mechanisms and repo features included in Cyrus 3,
> and interested in experiences with setups allowing an efficient "lossless"
> recovery too.
I'm a bit new to Cyrus, so... ;-) All I can say is that the replication seems
to work well. I have
master --> first slave (same room) --> second slave (distant datacenter).
I'll try today to see whether it's easy to restart from a slave by
cloning it.
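(For reference, the wiring for such a chain might look roughly like this - host names and credentials are placeholders, and the first replica would itself run a sync_client pointed at the second; see the Cyrus replication documentation for the real details:)

```
# imapd.conf on the master (placeholders):
sync_log: 1
sync_host: replica1.example.org
sync_authname: syncuser
sync_password: secret

# cyrus.conf on the master, STARTUP/DAEMON section:
#   syncclient  cmd="sync_client -r"
#
# cyrus.conf on each replica, SERVICES section:
#   syncserver  cmd="sync_server" listen="csync"
```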
Best regards.
--
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
xmpp: jas at obspm.fr
Heure local/Local time:
Mon Jun 18 10:36:19 CEST 2018
------------------------------
Message: 6
Date: Mon, 18 Jun 2018 11:19:27 +0200
From: Niels Dettenbach <nd at syndicat.com>
To: info-cyrus at lists.andrew.cmu.edu
Subject: Re: Restart from....? (DRP)
Message-ID: <2419012.xuhsTDA34a at gongo>
Content-Type: text/plain; charset="iso-8859-1"
On Monday, 18 June 2018 at 10:48:16 CEST, Albert Shih wrote:
> Everything seems to work fine, until I try to send the datasets to another
> server. I just cannot send a ZFS snapshot from this server to another. If
> the datasets are small that's OK, but with the mailboxes (~4 TB) the zfs
> command just hangs after 10-40 minutes for 1-10 minutes, comes back and works
> for 1 or 2 hours, then hangs again, etc.
Ahh,
yes,
we have local snapshots and a second ZFS machine for ZFS replication (incl.
snapshots) which runs in the background - so the snapshots are taken
locally and sent "in the background" over the network to another location. If
just the machine but not the disks breaks, we can use the local disk set in a
new machine to start over. If the whole site burns down, the disks (or,
temporarily, iSCSI, NFS or Samba) could be used to start over in/with new
hardware.
In smaller on-site setups we use e.g. FreeNAS as the "FreeBSD distribution"
for easier management (even by less skilled IT staff). This allows us
to run jails with e.g. cyrus (encapsulated and backed up too) which can be
handled "by click", btw. This means the cyrus (jails) are running on ZFS too.
> Yes, we use puppet; reinstalling the system and configuration is easy.
> The hard part is the data.
This depends on the storage (on the network, like NAS or SAN, or "local"). In
principle:
- mount or copy back the pool (usually the largest part)
- reimport the databases,
i.e. similar to:
https://forum.open-xchange.com/showthread.php?3512-Simple-Cyrus-mailbox-migration
or
http://www.monoplan.de/cyrus_imap_migration.html (German)
- reconstruct -f (I use "just" reconstruct -f, as this runs over the whole pool
too)
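Spelled out as a rough runbook (paths, devices and users are illustrative, and the commands are only printed here - run them by hand, with Cyrus stopped, on the restored system):

```shell
#!/bin/sh
# Rough restore runbook after the OS and config have been
# reinstalled (e.g. via ansible/puppet). Paths and the device
# name are placeholders; the steps are printed, not executed.
SPOOL="/var/spool/imap"   # mail pool
CONFDIR="/var/imap"       # cyrus databases

# 1. Mount or copy back the restored pool and DB directory.
MOUNT_CMD="mount /dev/restored-volume ${SPOOL}"

# 2. Ownership may differ on the fresh install.
CHOWN_CMD="chown -R cyrus:mail ${SPOOL} ${CONFDIR}"

# 3. Forced reconstruct: walks the pool folder by folder, mail by
#    mail, fixing DB inconsistencies left by the "hot" backup state.
FIX_CMD="su cyrus -c 'reconstruct -r -f'"

printf '%s\n' "$MOUNT_CMD" "$CHOWN_CMD" "$FIX_CMD"
```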
Then your cyrus should be fine again. The reconstruct -f (forced) run reads
the pool data (folder by folder, mail by mail) and "fixes" any inconsistencies
in the "database" (due to the "hot" state of the backup - not shut down
for backup). The cyrus databases seem quite robust to that (compared to most
other database systems).
> I'm a bit new to Cyrus so... All I can say is that the replication seems to
> work well. I have
Thanks for this info. Will try cyrus replication soon for testing purposes.
:)
> I'll try today to see whether it's easy to restart from a slave by
> cloning it.
I'm new to replication, but afaik it should be easy to turn a slave into a new
master just by reconfiguration (cyrus.conf, some imapd.conf flags), I assume.
hth a bit,
good luck,
niels.
--
---
Niels Dettenbach
Syndicat IT & Internet
http://www.syndicat.com
PGP: https://syndicat.com/pub_key.asc
---
------------------------------
Subject: Digest Footer
_______________________________________________
Info-cyrus mailing list
Info-cyrus at lists.andrew.cmu.edu
https://lists.andrew.cmu.edu/mailman/listinfo/info-cyrus
------------------------------
End of Info-cyrus Digest, Vol 155, Issue 25
*******************************************