Cyrus vs Dovecot
Pascal Gienger
Pascal.Gienger at uni-konstanz.de
Thu Aug 14 05:17:00 EDT 2008
Mathieu Kretchner <mathieu.kretchner at sophia.inria.fr> wrote:
> Ian G Batten wrote:
>> We have mailboxes.db and the metapartitions on ZFS, along with the zone
>> itself. The pool is drawn from space on four 10,000 rpm SAS drives
>> internal to the machine:
To give a (hopefully) comparable point of comparison:
We have our meta files and spool files also on ZFS, with mirrored pools:
# zpool status
  pool: cyrus
 state: ONLINE
 scrub: resilver completed with 0 errors on Sun May 25 12:17:46 2008
config:

        NAME                                       STATE     READ WRITE CKSUM
        cyrus                                      ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c6t600D0230006B66680C50AB4F92F61000d0  ONLINE       0     0     0
            c6t600D0230006C1C4C0C50BE4DFE511B00d0  ONLINE       0     0     0

errors: No known data errors

  pool: mail
 state: ONLINE
 scrub: resilver completed with 0 errors on Sun May 25 01:05:02 2008
config:

        NAME                                       STATE     READ WRITE CKSUM
        mail                                       ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c6t600D0230006B66680C50AB0F36ADF100d0  ONLINE       0     0     0
            c6t600D0230006C1C4C0C50BE57396E9F00d0  ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c6t600D0230006B66680C50AB5675F91300d0  ONLINE       0     0     0
            c6t600D0230006C1C4C0C50BE16FF1FE200d0  ONLINE       0     0     0

errors: No known data errors
"cyrus" is our log pool, "mail" our imap spool pool.
I/O is mostly write:
# zpool iostat mail 2
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
mail        2.08T  6.02T    226    163  1.36M  1.67M
mail        2.08T  6.02T    358     10  1.35M  94.4K
mail        2.08T  6.02T    234    599  1.08M  10.0M
mail        2.08T  6.02T     77      0   425K  3.98K
mail        2.08T  6.02T     85    306   484K  3.39M
mail        2.08T  6.02T     95      8   405K  75.6K
mail        2.08T  6.02T    107      6   798K  47.8K
mail        2.08T  6.02T     73    232   281K  2.30M
mail        2.08T  6.02T     77      2   304K  9.95K
mail        2.08T  6.02T     66    469   254K  5.84M
mail        2.08T  6.02T     83      4   409K  17.9K
As with Ian's setup, most read requests are serviced from ARC.
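A quick way to check that yourself (not part of our regular monitoring) is to compare
the ARC hit and miss counters next to the size statistic shown further below; the hit
rate is hits / (hits + misses):

# kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses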
Both kinds of data (meta and spool) live on this ZFS pool; however, we defined a
separate ZFS filesystem for the metadata so that we can take distinct snapshots of it.
cyrus.header remains on the IMAP spool partition.
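To illustrate the split (dataset names here are only examples, not our real layout):
the metadata gets its own filesystem under the pool and can then be snapshotted
independently of the much larger spool data:

# zfs create mail/meta
# zfs create mail/spool
# zfs snapshot mail/meta@before-reconstruct
# zfs list -t snapshot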
Raw disk I/O looks different because ZFS reads up to "recordsize" bytes from disk per
request (128k by default).
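The record size is a per-filesystem property; it can be inspected and, if desired,
changed, although a change only affects files written afterwards. Dataset names and the
value below are again only examples, not a tuning recommendation:

# zfs get recordsize mail
# zfs set recordsize=64K mail/meta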
Load is 0.47 at the moment, with 1355 imapd processes, 10 lmtpd processes
(limited by the delivering gateway), and 34 pop3d processes.
The machine has two dual-core Opteron processors, so 4 cores are
available. It has 20 GB of RAM, and the ZFS ARC currently uses:
# kstat zfs:0:arcstats:size
module: zfs                             instance: 0
name:   arcstats                        class:    misc
        size                            9308832256
That is roughly 9 GB of ZFS file cache.
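Should the ARC ever compete too hard with the imapd processes for those 20 GB, Solaris
lets you cap it in /etc/system (the value below, 10 GB, is just an example, and a reboot
is needed for it to take effect):

set zfs:zfs_arc_max=0x280000000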
Hope this helps you a little bit.
Pascal