Meraki Switches - Break Apart Stacked Switch Setup?

Solved
amwjoe
Here to help


We have a datacenter setup whose core backbone is two MS225-48 Meraki switches in a stack joined by 10GB stack cables. Each switch has its own 1GB uplink to the firewall, though of course one of those uplinks is discarding packets in the stacked setup. All of our servers have redundant Ethernet uplinks going to each stack switch.

 

It's been great for us so far and we haven't had any issues, but I run into trouble with Meraki's software upgrades in a stacked setup because both stack members reboot at the same time. This causes a brief network outage, which generally isn't a big deal, but we have a rather large Failover Cluster environment running on it and some VMs will go into "Saved" states after everything recovers post-upgrade.

 

My question is: should I take apart my stacked switch setup, since I have redundant uplinks from my servers to both switches, and will I lose any performance? I'm fine with managing these switches individually since Meraki already makes that easy. My main goal is to make maintenance periods much more manageable, with no impact on my Cluster environment.

PhilipDAth
Kind of a big deal

My first choice would be to get an additional switch. Put it into a separate Meraki network, and configure that network with a different maintenance window from the main network. Then connect a backup NIC from each ESXi host to it, and configure it for backup only in VMware. This switch would uplink to your MS225s.

 

This way the ESXi hosts are always able to talk to each other, and won't go into a saved state.
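To illustrate the "different maintenance window" part of the approach above, here is a minimal sketch using the Meraki Dashboard API v1. The endpoint path and `upgradeWindow` field names reflect my reading of the public API docs and the key, network ID, and window values are placeholders, so verify against the current documentation before relying on it:

```python
# Hedged sketch: give the backup-switch's separate Meraki network its own
# firmware upgrade window so it never reboots at the same time as the main
# network. Endpoint/field names are assumptions from Dashboard API v1 docs.
import json
import urllib.request

BASE = "https://api.meraki.com/api/v1"

def build_upgrade_window_request(api_key: str, network_id: str,
                                 day: str, hour: str) -> urllib.request.Request:
    """Build (but do not send) a PUT that sets the network's upgrade
    window, e.g. day='sun', hour='4:00'."""
    url = f"{BASE}/networks/{network_id}/firmwareUpgrades"
    body = json.dumps({"upgradeWindow": {"dayOfWeek": day, "hourOfDay": hour}})
    req = urllib.request.Request(url, data=body.encode(), method="PUT")
    req.add_header("Authorization", f"Bearer {api_key}")
    req.add_header("Content-Type", "application/json")
    return req

if __name__ == "__main__":
    req = build_upgrade_window_request("YOUR_KEY", "N_backup", "sun", "4:00")
    # urllib.request.urlopen(req)  # uncomment to actually apply the change
    print(req.get_full_url())
```

The same thing can of course be done entirely in the Dashboard UI; the sketch only shows that the two networks end up with independent windows.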

 

 

With regard to this switch:

  • You could get a single MS425 - a pure 10 GbE switch - and actually use it as the primary, with backup going to the MS225s.
  • You could get another MS225-24 so you could connect to the main switches at 10 GbE.
  • You could get a much cheaper MS120, use it purely for backup, and connect to the MS225s at 1 GbE.

 

If you can't do that, then yes, you could break your current stack and put the switches into separate networks (I have done this myself). You lose LACP and layer 3 routing. If you aren't using those, or could live without them, then it is definitely an option.

GIdenJoe
Kind of a big deal

I just want to chime in on the fact that you said they are stacked using 10 Gbps.
MS225s use physical stacking, and those cables are 40 Gbps each, not 10 Gbps.

 

But to help you with your post:
If you don't use port-channels on your ESXi hosts and just divide your VMs manually over your vmnics, you would be better off using separate access switches for those hosts. You could use the separate network like Philip suggests, or if you want to keep everything in the same network, you could do staged upgrades.

 

If you do use port channels and want separate switch upgrades, Meraki switches won't help you; you would need vPCs on Nexus switches if you'd like to keep a classical layer 2 design.

amwjoe
Here to help

Definitely appreciate the input from both of you, and GIdenJoe, thanks for the correction on the stack cables. I misspoke yesterday; my stack cables are indeed 40 Gbps cables.

 

My environment is actually a Microsoft Hyper-V Failover Cluster but I understand the points you both shared.

 

I will proceed to disassemble the stacked switch setup, run both switches as independent access switches, and use staged upgrades to spread out the upgrade times.
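For the staged-upgrades part, the Dashboard API v1 exposes staged upgrade groups that can each be rebooted in a separate stage. The sketch below only builds the request payloads; the `assignedDevices` structure follows my reading of the staged-upgrade docs and the serials and group names are placeholders, so check the current API reference before posting them:

```python
# Hedged sketch: payloads for two staged-upgrade groups, one per (now
# unstacked) core switch, so each switch reboots in its own stage.
# Intended target endpoint (an assumption from Dashboard API v1 docs):
#   POST /networks/{networkId}/firmwareUpgrades/staged/groups
def staged_group_payload(name: str, serials: list[str],
                         is_default: bool = False) -> dict:
    """One group per switch; Dashboard then upgrades the groups in stages."""
    return {
        "name": name,
        "isDefault": is_default,
        "assignedDevices": {"devices": [{"serial": s} for s in serials]},
    }

# Placeholder serials for the left and right core switch.
groups = [
    staged_group_payload("core-left", ["Q2XX-LEFT-0001"], is_default=True),
    staged_group_payload("core-right", ["Q2XX-RIGHT-0002"]),
]
```

With two groups like these, a brief gap between stages lets the cluster's redundant uplinks fail over before the second switch reboots.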

 

I reached out to Support to get the appropriate steps for removing a switch stack, and they stated it's just a matter of:

1. Deleting the Stack within Switching > Stacked Switches > Checking the box for the stack > Delete Stacks

2. Removing the Stack Cables

 

They said it could be done online without downtime, and suggested I take note of my port configurations (descriptions are what I'm mainly focused on). Does anyone have real-world experience doing this?
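Support's two steps, plus the "take note of your port configurations" advice, can be scripted against the Dashboard API v1 so you have a snapshot to fall back on. A minimal sketch, assuming the `GET /devices/{serial}/switch/ports` and `DELETE /networks/{networkId}/switch/stacks/{switchStackId}` paths as I understand them, with placeholder serials and IDs:

```python
# Hedged sketch: build (but don't send) the requests to back up each switch's
# port configs before deleting the stack. Paths are assumptions from the
# Dashboard API v1 docs; serials, network ID, and stack ID are placeholders.
import urllib.request

BASE = "https://api.meraki.com/api/v1"

def ports_backup_request(api_key: str, serial: str) -> urllib.request.Request:
    """GET the full port config (names/descriptions, VLANs, etc.) of one switch."""
    req = urllib.request.Request(f"{BASE}/devices/{serial}/switch/ports")
    req.add_header("Authorization", f"Bearer {api_key}")
    return req

def delete_stack_request(api_key: str, network_id: str,
                         stack_id: str) -> urllib.request.Request:
    """DELETE the stack object; this is step 1 from Support, done via API."""
    req = urllib.request.Request(
        f"{BASE}/networks/{network_id}/switch/stacks/{stack_id}",
        method="DELETE")
    req.add_header("Authorization", f"Bearer {api_key}")
    return req

if __name__ == "__main__":
    for serial in ("Q2XX-LEFT-0001", "Q2XX-RIGHT-0002"):
        print(ports_backup_request("YOUR_KEY", serial).get_full_url())
    # urllib.request.urlopen(...)  # save each JSON response to disk first,
    #                              # then send delete_stack_request(...)
```

Saving the two GET responses to files before the delete gives you an exact record of every port description to diff against afterwards.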

GIdenJoe
Kind of a big deal

Good luck with that one. Do keep in mind that each individual switch now needs dual uplinks to the distribution switches to maintain a redundant design.

amwjoe
Here to help

I was relieved to hear there wouldn't be downtime, but I want to be sure others haven't run into issues doing this.

 

Yes, absolutely. I do have my Live Migration switch that I'd need to do that on.

 

I heard someone mention that, aside from these two switches' individual uplinks to the firewall, they'd also put a 10GB fiber link between the switches. I'm not sure of the benefit of that, since traffic is naturally going to go to the firewall as the next hop anyway?

GIdenJoe
Kind of a big deal

So you don't really have a tiered topology: the two switches you mentioned are in fact the only switches in the network, or are serving as distribution switches for the rest of the network, with the firewalls directly attached. In that case you should have a link between them, and make sure that link is not blocked by spanning tree. That means the left switch becomes your root bridge, the other switch is a normal bridge, and traffic from the second switch flows to the first switch before going to the firewall. The reason for this is that traffic flowing from hosts on the left switch to hosts on the right switch then won't have to pass through the firewall's internal switching, and you can still keep your regular design where the firewall has two downlinks, one to each switch.
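Pinning the left switch as root bridge, as described above, can be done by lowering its STP bridge priority. A hedged sketch against what I believe is the Dashboard API v1 `PUT /networks/{networkId}/switch/stp` endpoint; the field names and serial are assumptions to verify against the docs:

```python
# Hedged sketch: lower the left switch's STP priority to 4096 (default 32768)
# so it wins root-bridge election. Endpoint/field names are assumptions from
# Dashboard API v1 docs; the serial and network ID are placeholders.
import json
import urllib.request

BASE = "https://api.meraki.com/api/v1"

def build_stp_priority_request(api_key: str, network_id: str,
                               left_serial: str) -> urllib.request.Request:
    """Build (but don't send) the PUT that makes the left switch the root."""
    body = json.dumps({
        "rstpEnabled": True,
        "stpBridgePriority": [
            {"switches": [left_serial], "stpPriority": 4096},
        ],
    })
    url = f"{BASE}/networks/{network_id}/switch/stp"
    req = urllib.request.Request(url, data=body.encode(), method="PUT")
    req.add_header("Authorization", f"Bearer {api_key}")
    req.add_header("Content-Type", "application/json")
    return req

if __name__ == "__main__":
    req = build_stp_priority_request("YOUR_KEY", "N_1234", "Q2XX-LEFT-0001")
    # urllib.request.urlopen(req)  # uncomment to apply
    print(req.get_full_url())
```

Leaving the right switch at the default priority is enough; only one switch needs the lower value for a deterministic root.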

amwjoe
Here to help

That makes sense, definitely appreciate this.

Should I let spanning tree block the firewall uplink on the second switch once the 10GB link between the two switches is in place? I'm assuming it's going to want to block something, thinking there is a loop?

GIdenJoe
Kind of a big deal

Yes, that is what I wanted to bring across.

So, still to this day, the MX appliances do not support STP and will just transparently forward BPDUs coming from your left switch over to your right switch.

 

So if you have a higher speed between your switches than up to the MX, you will have an STP block on the right switch's port going to the MX, but a root port forwarding to the left switch.

 

If you have identical speeds between your switches and to your MX (for example, a single 1 Gbps link between your switches and 1 Gbps upstream to each MX), then you will have to make sure the lower port number goes between your switches and the higher port number to the MX.

In that case, for example, if you are using 48-port switches, you could have ports 47 and 48 of both switches going upstream to the MX and port 46 between the switches.
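The port-number advice above comes from the standard 802.1D/802.1w tie-break: a received BPDU is ranked by root path cost, then sender bridge ID, then sender port ID, and the port receiving the best BPDU becomes the root port while the loser blocks. A small illustration (plain Python, not Meraki code; port numbers follow the 46/47 example above):

```python
# Illustration of the spanning-tree tie-break deciding which uplink the
# right-hand switch blocks. Since the MX relays the left switch's BPDUs
# unchanged, cost and bridge ID tie, and the sender's port ID decides.
from dataclasses import dataclass

@dataclass(frozen=True)
class Bpdu:
    root_path_cost: int    # advertised cost plus our link cost
    sender_bridge_id: int  # lower = better
    sender_port_id: int    # lower = better; the final tie-break

def root_port(received: dict[str, Bpdu]) -> str:
    """Return the local port whose BPDU wins the (cost, bridge, port) compare."""
    return min(received, key=lambda p: (received[p].root_path_cost,
                                        received[p].sender_bridge_id,
                                        received[p].sender_port_id))

# Equal link speeds: both BPDUs originate from the left switch, which sends
# on its port 46 toward us and on its port 47 toward the MX.
bpdus = {
    "port46_to_left_switch": Bpdu(4, 32768, 46),
    "port47_via_mx":         Bpdu(4, 32768, 47),  # same BPDU, relayed by MX
}
print(root_port(bpdus))  # -> port46_to_left_switch; the MX-facing port blocks
```

This is why, with identical speeds everywhere, cabling the inter-switch link on the lower port number guarantees the block lands on the MX-facing port rather than on the direct link.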

amwjoe
Here to help

This helped a lot, thank you everyone!
