Cyrus & ZFS performance

Vincent Fox vbfox at ucdavis.edu
Thu Jul 5 12:10:51 EDT 2007


Dale,  I hope you have the time to supply a couple more numbers.

What do your sar and zpool iostat numbers look like?

More importantly, what does iostat -cxn 5 look like during peak?
For us peak is 1100-1300 hours, hitting about 40%. Of course this
is summertime usage, so things are a bit slack here with most
students home and not much daily class-related chitchat.
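
For comparison, here is roughly how we gather ours during the busy
window; the interval and sample counts are just what we happen to
use, and "mailspool" below is a placeholder for your pool name:

    # CPU utilization every 5 seconds, 120 samples (about 10 minutes)
    sar -u 5 120

    # per-pool and per-vdev bandwidth/IOPS at a 5-second interval
    zpool iostat -v mailspool 5

    # extended per-device stats plus a CPU summary, descriptive names
    iostat -cxn 5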

If this 3511 setup is yielding very similar numbers, we may switch
array products during our next buildout.

Dale Ghent wrote:
> On Jul 4, 2007, at 12:59 PM, Vincent Fox wrote:
>
>> Sun recommends against the 3511 in most literature I read, saying
>> that the SATA drives are slower and won't handle as high an IOPS
>> load. But they are working out okay for you? Perhaps it's just
>> vendor upsell to the more expensive 3510FC....
>
> It's upselling. These 3511s (two units, each comprising a 3511 with 
> dual controllers and an attached 3511 JBOD, the head and JBOD stocked 
> with 250GB and 500GB drives respectively) were recently decommissioned 
> from database work and did that job well for 1.5 years. The 
> reliability warnings given by Sun and others are just one way they 
> try to urge you to go with FC drives... half the capacity at twice 
> the cost. FC may well be faster, but not persuasively so for our 
> purposes at least.
>
>> We are actually doing SAN multipathing, so hardware paranoia first, 
>> and then ZFS on top of that for even more.
>
> Similar here, as well. Each of our X4100M2s has a two-port QLogic 
> 2462 card, with each port connected to a separate switch. Each of our 
> 3511s is likewise connected to two switches, with its LUNs advertised 
> out of all host ports. mpxio does its job well here. The two 3511s 
> also sit in separate data centers in separate buildings, with my SAN 
> covering it all... with ZFS mirroring, we essentially have real-time 
> replication between two geographically distinct places. If one data 
> center were lost, everyone's mail spool would still be fine in the 
> other building, and ZFS on the Cyrus servers would just see that as 
> one side of the mirror going down.
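
If I'm reading that layout right, the pool creation would look roughly
like the following, with one LUN from each building's 3511 in every
mirror pair; the pool and device names here are invented purely for
illustration:

    # each mirror vdev pairs a LUN from one data center with one
    # from the other
    zpool create mailspool \
        mirror c4t600A0B8000111111d0 c4t600A0B8000222222d0 \
        mirror c4t600A0B8000333333d0 c4t600A0B8000444444d0

With mpxio enabled, each c#t#d# node is already the multipathed
device, so a path failure is absorbed below ZFS and losing a whole
array only degrades each mirror.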
>
> /dale
>
> -- 
> Dale Ghent
> Specialist, Storage and UNIX Systems
> UMBC - Office of Information Technology
> ECS 201 - x51705
>
>
>


