ZFS doing insane I/O reads
Ram
ram at netcore.co.in
Tue Feb 28 01:13:42 EST 2012
On 02/27/2012 04:16 PM, Eric Luyten wrote:
> On Mon, February 27, 2012 11:10 am, Ram wrote:
>> I just deployed ZFS on my newer Cyrus servers.
>> These servers get fewer than 2000 mails per hour and around 400
>> concurrent POP/IMAP connections.
>>
>>
>> I have seen that even when there are no incoming POP or IMAP connections,
>> there is still a large amount of READ activity on the ZFS partitions. Is this
>> normal behaviour for an IMAP server? iostat sometimes shows up to 2000 TPS.
>>
>>
>> The reads are in fact more than 10x the writes. I am afraid I
>> will be thrashing the hard disks. Do I need to tune ZFS specially for Cyrus?
>>
>>
>>
>> This is the typical zpool iostat output
>>
>>
>> zpool iostat 1
>>              capacity     operations    bandwidth
>> pool        alloc   free   read  write   read  write
>> ---------- ----- ----- ----- ----- ----- -----
>> imap 145G 655G 418 58 18.0M 1.78M
>> imap 146G 654G 258 118 8.28M 960K
>> imap 145G 655G 447 146 19.4M 4.37M
>> imap 145G 655G 413 32 19.4M 1.46M
>> imap 145G 655G 339 4 14.8M 20.0K
>> imap 145G 655G 341 40 15.7M 755K
>> imap 145G 655G 305 10 15.0M 55.9K
>> imap 145G 655G 328 12 14.8M 136K
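A per-vdev view would show which disks those reads actually hit. This is just
the standard -v flag of zpool iostat, with "imap" being the pool name from the
output above:

   # per-vdev capacity, operations and bandwidth, refreshed every second
   zpool iostat -v imap 1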
>
> Ram,
>
> We have a single Cyrus server, about ten times as busy as yours, with four ZFS
> pools (EMC Celerra iSCSI SAN) for the message stores; all the databases, quota
> and seen information are on an SSD-based (mirrored) pool internal to the server.
> We also have a few GB of SSD-based ZIL (synchronous write cache) per pool.
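For reference, a dedicated log device like that is normally attached with the
ordinary 'zpool add ... log ...' syntax; the pool name below is taken from the
output that follows, but the device name is only a placeholder:

   # attach an SSD as a separate ZIL (slog) device for one pool
   zpool add cpool1 log c1t2d0
   # 'zpool status cpool1' then lists it under a separate 'logs' section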
>
>
> Here is our 'zpool iostat 1' output:
>
> capacity operations bandwidth
> pool alloc free read write read write
> ---------- ----- ----- ----- ----- ----- -----
> cpool1 901G 2.96T 22 32 422K 286K
> cpool2 1.18T 2.66T 29 45 578K 459K
> cpool3 1.00T 2.84T 24 34 456K 314K
> cpool4 993G 2.87T 25 35 455K 328K
> ssd 7.49G 22.3G 4 35 17.2K 708K
> ---------- ----- ----- ----- ----- ----- -----
> cpool1 901G 2.96T 45 16 670K 759K
> cpool2 1.18T 2.66T 47 25 565K 603K
> cpool3 1.00T 2.84T 33 13 410K 483K
> cpool4 993G 2.87T 12 8 525K 244K
> ssd 7.49G 22.3G 13 210 49.4K 10.8M
> ---------- ----- ----- ----- ----- ----- -----
> cpool1 901G 2.96T 20 22 77.9K 2.15M
> cpool2 1.18T 2.66T 25 4 937K 128K
> cpool3 1.00T 2.84T 20 91 324K 11.0M
> cpool4 993G 2.87T 17 13 844K 83.9K
> ssd 7.49G 22.3G 6 237 20.0K 20.9M
> ---------- ----- ----- ----- ----- ----- -----
> cpool1 901G 2.96T 0 0 1023 0
> cpool2 1.18T 2.66T 12 21 146K 1.26M
> cpool3 1.00T 2.84T 8 26 46.5K 2.28M
> cpool4 993G 2.87T 11 4 353K 24.0K
> ssd 7.49G 22.3G 17 135 99.4K 8.12M
> ---------- ----- ----- ----- ----- ----- -----
> cpool1 901G 2.96T 4 0 80.9K 4.00K
> cpool2 1.18T 2.66T 7 6 133K 28.0K
> cpool3 1.00T 2.84T 6 0 16.5K 4.00K
> cpool4 993G 2.87T 4 4 149K 20.0K
> ssd 7.49G 22.3G 9 76 51.0K 4.24M
> ---------- ----- ----- ----- ----- ----- -----
> cpool1 901G 2.96T 12 0 269K 4.00K
> cpool2 1.18T 2.66T 19 0 327K 4.00K
> cpool3 1.00T 2.84T 7 3 11.0K 16.0K
> cpool4 993G 2.87T 5 95 167K 11.4M
> ssd 7.49G 22.3G 4 226 17.5K 25.2M
> ---------- ----- ----- ----- ----- ----- -----
> cpool1 901G 2.96T 14 20 311K 1.22M
> cpool2 1.18T 2.66T 19 15 85.4K 1.39M
> cpool3 1.00T 2.84T 6 6 5.49K 40.0K
> cpool4 993G 2.87T 4 15 17.0K 1.70M
> ssd 7.49G 22.3G 6 151 21.5K 13.1M
> ---------- ----- ----- ----- ----- ----- -----
> cpool1 901G 2.96T 56 15 2.11M 559K
> cpool2 1.18T 2.66T 13 7 18.5K 32.0K
> cpool3 1.00T 2.84T 5 4 54.4K 392K
> cpool4 993G 2.87T 17 2 66.4K 136K
> ssd 7.49G 22.3G 6 109 45.9K 8.29M
> ---------- ----- ----- ----- ----- ----- -----
> cpool1 901G 2.96T 38 19 228K 1.89M
> cpool2 1.18T 2.66T 29 11 160K 300K
> cpool3 1.00T 2.84T 4 4 11.5K 24.0K
> cpool4 993G 2.87T 9 8 31.5K 56.0K
> ssd 7.49G 22.3G 12 150 46.0K 12.1M
> ---------- ----- ----- ----- ----- ----- -----
> cpool1 901G 2.96T 32 1 106K 256K
> cpool2 1.18T 2.66T 46 5 692K 95.9K
> cpool3 1.00T 2.84T 7 13 189K 324K
> cpool4 993G 2.87T 4 0 29.0K 4.00K
> ssd 7.49G 22.3G 25 96 149K 8.08M
> ---------- ----- ----- ----- ----- ----- -----
>
>
> Q1: How much RAM does your server have?
> Solaris 10 uses all remaining free RAM as ZFS read cache (the ARC).
> We have 72 GB of RAM in our server.
> Or are you using ZFS on e.g. BSD?
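On Solaris the current and maximum ARC size can be checked directly from the
arcstats kstats; a quick sketch, assuming the stock kstat utility:

   # current ARC size and the ceiling it is allowed to grow to, in bytes
   kstat -p zfs:0:arcstats:size
   kstat -p zfs:0:arcstats:c_max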
>
>
> Q2: What is your 'fsstat zfs 1' output?
>
This is a 16 GB RAM server running Linux CentOS 5.5 64-bit.
There definitely seems to be something wrong, because all the memory
on the machine is free.
(I don't seem to have fsstat on my server; I will have to get it
compiled.)
Do I need to configure ZFS to use a chunk of memory so that it
thrashes the drives less?
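Roughly what I have in mind, assuming the native ZFS on Linux module rather
than zfs-fuse (the 8 GB value is only an example):

   # see how big the ARC currently is and how big it may grow
   grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats

   # let the ARC use up to 8 GB of RAM, persistent across module reloads
   echo 'options zfs zfs_arc_max=8589934592' >> /etc/modprobe.d/zfs.conf

   # or adjust it on the running system
   echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max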
Thanks
Ram