MS425 Series for Datacenter

Solved
amwjoe
Here to help


I currently have 2 MS225-48 switches in a stacked configuration in our datacenter running a 20-node Hyper-V Failover Cluster. We're looking to upgrade these to 10Gb switches, and the MS425 series caught my eye as the only Meraki switches that could provide the number of 10Gb ports I need. We have basic L2 needs but are looking to upgrade from a 1Gb to a 10Gb setup.

 

I plan not to stack them, so I don't endure the brief downtime of the entire stack going down during firmware upgrades.

 

I've been doing some reading, and while I really enjoy Meraki switches and haven't really had issues, I saw others recommend against using them for datacenter use cases (aside from cost) because of speed issues versus other brands, such as the 800Gbps switching capacity.

 

Anyone have speed issues or other issues implementing MS425 Switches in a datacenter environment? 

1 Accepted Solution
cmr
Kind of a big deal

We run a stack of MS355-24Xs in our primary datacentre; at the time we found more 355-24s were cheaper than 425s. They have performed well, though we did consider separating them into two stacks to avoid the minute of downtime when an upgrade is scheduled. They perform all the L3 for us as well (excluding DMZs and WAN).

If my answer solves your problem please click Accept as Solution so others can benefit from it.


3 Replies
GreenMan
Meraki Employee

It's not that those switches won't work, and it won't be down to basic speed; they're non-blocking architectures. What I would say, though, is that they're not designed with Data Centre environments in mind. In the Cisco toolbag, this is what Nexus exists for. Meraki switches (and other switches designed for user access / aggregation) do not have, for example, very deep buffers for storing traffic destined for over-subscribed output ports. This tends to happen more frequently with server applications, and some DC protocols (iSCSI, off the top of my head) dislike packet loss more than most. Switches designed for DC use tend to be pricier and more complex, so you pay your money and make your choice.
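To make the buffering point concrete, here's a toy Python sketch (my own illustration, with made-up numbers; nothing measured on Meraki or Nexus hardware) of an oversubscribed output port: two inputs bursting into one output of the same speed, so the egress queue grows each tick, and a shallow buffer tail-drops where a deep one absorbs the burst.

```python
def simulate(arrivals, buffer_depth, drain_rate):
    """Count tail-dropped packets at one egress port.

    arrivals:     packets arriving each tick (e.g. 2/tick from two senders)
    buffer_depth: egress buffer size in packets
    drain_rate:   packets the output port can transmit per tick
    """
    queued = 0
    drops = 0
    for pkts in arrivals:
        queued += pkts
        if queued > buffer_depth:          # buffer full: excess is tail-dropped
            drops += queued - buffer_depth
            queued = buffer_depth
        queued = max(0, queued - drain_rate)  # port drains at line rate
    return drops

# A 100-tick burst where two senders each send 1 packet/tick into a
# port that drains 1 packet/tick (2:1 oversubscription):
burst = [2] * 100
print(simulate(burst, buffer_depth=10, drain_rate=1))    # shallow buffer: 91 drops
print(simulate(burst, buffer_depth=200, drain_rate=1))   # deep buffer: 0 drops
```

Same offered load in both runs; only the buffer depth changes whether the burst is absorbed or dropped, and it's those drops that loss-sensitive protocols like iSCSI punish you for.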

amwjoe
Here to help

Great, thank you both so much!
