MS425 and Nutanix/iSCSI

rpn
Here to help

Has anyone tried to use the MS425 as a ToR/server access switch?  Specifically, have you tried with Nutanix or another storage endpoint running iSCSI?  If so, I'd be curious to hear your thoughts and whether ultimately it was successful.

 

Thanks!

6 Replies
PhilipDAth
Kind of a big deal

I am also interested in this answer.

 

Now the important question - which makes a big difference because of buffering: are all storage-related devices going to be connected at 10GbE, or are some devices only going to be at 1GbE?

rpn
Here to help

Great question.  In this case, all storage is going to be 10Gb, but several legacy hosts are going to be 1Gb.  The uplink to the core will also be 10Gb.  So yes, in addition to iSCSI jumbo frames, some additional buffering capability to accommodate the rate changes could be needed.
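For anyone heading down that path, here is a minimal sketch of pushing a jumbo MTU to the switches from a script, assuming the Dashboard API's switch MTU endpoint (/networks/{networkId}/switch/mtu); the API key, network ID, and MTU value are placeholders, so verify the endpoint and the supported maximum for your firmware before relying on it. The hosts and the storage array have to be set for jumbo frames as well, end to end.

import requests

API_KEY = "YOUR_DASHBOARD_API_KEY"   # placeholder
NETWORK_ID = "N_1234567890"          # placeholder

# Set the network-wide default MTU for the MS switches to a jumbo size for iSCSI.
resp = requests.put(
    f"https://api.meraki.com/api/v1/networks/{NETWORK_ID}/switch/mtu",
    headers={
        "X-Cisco-Meraki-API-Key": API_KEY,
        "Content-Type": "application/json",
    },
    json={"defaultMtuSize": 9578},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())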

 

Here's where I am on this right now.  I've been in touch with a Meraki field SE who pinged his broader team, asked my own team within a large VAR, and solicited feedback on a popular internet forum.  I'm getting zero encouragement to go with the MS425 in this application.  It's a shame, because my customer is a perfect target for Meraki in terms of the impact cloud management would have on their efficiency, but I'll have to go with WLAN only for now.  Since they are "IT-lite," they are not a good candidate for a science experiment.  Hopefully I can move their access layer switching to Meraki next year, but it's not in the budget this year.

 

If anyone pops up this evening who wants to make a case for the MS425 in an iSCSI ToR application, I'm still all ears, but tomorrow it goes to the customer.

PhilipDAth
Kind of a big deal

Would it be feasible to put 10GbE NICs in the legacy hosts?  If it is all 10GbE to 10GbE, I think you will be fine.

PhilipDAth
Kind of a big deal

I have had some feedback on the buffering in the MS425.

 

This is my personal opinion from doing some calculations.

 

If you had a 10GbE storage platform capable of driving a full 10 Gb/s (such as SSD or a lot of RAM for caching), then you would probably get away with one "legacy" storage client working at 1 Gb/s.  If your storage platform maxed out around 5 Gb/s, then you could probably handle two 1 Gb/s legacy storage clients.  If your storage platform maxed out at 2 Gb/s (perhaps it has only magnetic disks), you could probably handle five legacy 1 Gb/s storage clients.  All other ports, storage or otherwise, would need to be operating at 10GbE for the calculations to hold.

 

If you exceed these thresholds, you are likely to suffer packet loss and storage throughput will drop badly.  Ideally you would not want to operate at the edge of the thresholds I have given.
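To make the pattern behind those numbers explicit, here is a rough sketch of the rule of thumb they imply: (storage throughput in Gb/s) x (number of 1 Gb/s legacy clients) staying at or under roughly 10.  That reading of the figures is my own inference from the calculations, not a published Meraki limit.

def max_legacy_clients(storage_gbps: float, budget_gbps: float = 10.0) -> int:
    """Estimated number of 1 Gb/s legacy clients a storage platform of the given
    throughput can feed through the switch before buffering, and then packet
    loss, becomes a real risk.  The 10 Gb/s "budget" is inferred from the
    scenarios above, not a documented MS425 specification."""
    return int(budget_gbps // storage_gbps)

for storage in (10, 5, 2):
    print(f"{storage} Gb/s platform -> ~{max_legacy_clients(storage)} legacy 1 Gb/s client(s)")

# Prints 1, 2 and 5 clients respectively, matching the three scenarios described above.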

boutdislife
New here

We've got redundant MS425-16s connected via iSCSI to a Reduxio. It was a greenfield deployment a few months ago. We haven't had any issues that I am aware of, and we're seeing some impressive stats from the Reduxio.

MRCUR
Kind of a big deal

I've deployed a stack of MS350s running 1Gb iSCSI with EqualLogic devices without any problems. It has worked very well serving the EQLs and the ESXi hosts.

MRCUR | CMNO #12