Anyone deployed a non-stacking core switch design?

niroulabh
Getting noticed


Use of management vlan instead of loopback for the Meraki switches brings the spanning tree challenges. As Meraki doesn't use the PVST, I ran into issues If I don't stack the Meraki core switches. How are you all deploying the Meraki core switches? Are you stacking core switches?

12 Replies
KarstenI
Kind of a big deal

I only use stacking. But the VRRP setup looks pretty much like what we did in the old HSRP days with non-stacked Catalysts.

What exactly are your problems?

niroulabh
Getting noticed

I understand that for Catalyst switches. But in my case the whole site is Meraki MS switches, from core to distribution to access. If I don't stack the core switches and use the same management VLAN on all the Meraki switches, half of my uplink ports will not forward traffic. The reason is that Meraki doesn't support PVST, so the spanning tree loop on the management VLAN gets blocked, and because there is only a single spanning tree instance, that block also stops the other VLANs on the same ports.
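
For reference, since MS switches run a single RSTP instance rather than per-VLAN spanning tree, the usual workaround in a non-stacked design is to pin the root bridge deliberately, so the blocked uplinks land where you expect them. Below is a minimal sketch of doing that with the Meraki Dashboard API Python SDK; the method names and the `stpBridgePriority` shape follow my reading of the v1 API docs, and the API key, network ID and serials are placeholders.

```python
import meraki

API_KEY = "YOUR_API_KEY"       # placeholder
NETWORK_ID = "N_1234567890"    # placeholder switch network ID
CORE_A = "Q2XX-AAAA-AAAA"      # placeholder serial of the primary core MS
CORE_B = "Q2XX-BBBB-BBBB"      # placeholder serial of the secondary core MS

dashboard = meraki.DashboardAPI(API_KEY, suppress_logging=True)

# Inspect the current RSTP settings for the switch network.
current = dashboard.switch.getNetworkSwitchStp(NETWORK_ID)
print("Current STP config:", current)

# Pin the root bridge: lowest priority on the primary core, next-lowest on
# the secondary core, so RSTP blocks the uplinks you expect it to block.
dashboard.switch.updateNetworkSwitchStp(
    NETWORK_ID,
    rstpEnabled=True,
    stpBridgePriority=[
        {"switches": [CORE_A], "stpPriority": 4096},
        {"switches": [CORE_B], "stpPriority": 8192},
    ],
)
```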

PhilipDAth
Kind of a big deal

If there is no IP storage (such as iSCSI) I use stacking.  If storage is involved I either use an additional non-stacked switch or don't stack the core switches.  Then when a firmware update comes out, the switches can be rebooted independently.

KarstenI
Kind of a big deal

That’s an important point. When a customer needed iSCSI, in the past I’ve put one of the storage paths on a non-Meraki switch so it is completely independent.

GIdenJoe
Kind of a big deal

Am I correct to assume that in the case you are describing, you connect your storage directly to your distribution switches, because you can't tolerate any total outage?
So if you were fully Meraki and deployed two "server switches" that are not stacked, and attached your storage arrays and servers to each one, you wouldn't have the issue, because you could stagger the upgrades of those access switches individually?

 

In that case, that is another reason either not to connect your servers directly to your core switching, or to use a Catalyst core (optionally monitored with Meraki) and rely on ISSU upgrades?

PhilipDAth
Kind of a big deal

> Because you can't have any total outage?

 

Yes.  A storage outage, such as in a virtual machine environment, will cause compute outages, and those are outages that can't be recovered from without human intervention.

 

>you wouldn't have the issue because you can individually stagger those access switches in upgrade?

 

Correct.  In fact, with storage, I usually create two separate Meraki networks -A and -B, and split the switches across them.  Then you can have different maintenance windows.
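
To make that -A / -B split concrete, here's a hedged sketch of creating the two switch networks with the Meraki Dashboard API Python SDK. The organization ID, names and time zone are placeholders, and you would still claim or move the switches into each network afterwards (via the dashboard or the claim endpoints).

```python
import meraki

API_KEY = "YOUR_API_KEY"   # placeholder
ORG_ID = "123456"          # placeholder organization ID

dashboard = meraki.DashboardAPI(API_KEY, suppress_logging=True)

# Create two switch-only networks so the A and B halves of the fabric can be
# managed (and upgraded) independently.
for suffix in ("-A", "-B"):
    net = dashboard.organizations.createOrganizationNetwork(
        ORG_ID,
        name=f"Site1-Switching{suffix}",   # hypothetical naming convention
        productTypes=["switch"],
        timeZone="Pacific/Auckland",       # placeholder time zone
    )
    print("Created network", net["id"], net["name"])
```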

 

ISSU only tends to be available with the really expensive switches.  If you have the budget for ISSU, you probably aren't considering Meraki.

 

If the company is big enough to have a "server switch block" then I lean towards using Nexus switches.  IMHO, Catalyst IOS-XE is not reliable enough for companies that want 24x7 operation.  It is just too buggy and has been for a long time.  It was a major contributor to the failure of the MS390 series.  A lot of the "improvements" in the MS390 series have been about getting IOS-XE bugs fixed.

GIdenJoe
Kind of a big deal

Thx for the info!

About the Catalyst switches: I have only once hit a bug on a Catalyst switch, in 16.11 code, with multicast and port-channels.  Beyond that I have never encountered any bugs, so I'm not sure which numerous bugs you are pointing to.  ISSU is possible on the 9500 platform, and if you compare the lower-end C9500-16X to the MS425-16, the difference in list price is not that large percentage-wise, and at discounted prices it should be even closer.  So ISSU isn't that unattainable 😉  I do think the Advantage licensing is a bit much.

PhilipDAth
Kind of a big deal

>For the rest I have never encountered any bugs.

 

Memory leaks, leading to switch crashes after being left running for extended periods of time.

 

I did note this on the 9500X data sheet.

https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-9500-series-switches/nb-06-cat95... 

[inline screenshot from the Catalyst 9500 Series data sheet]

 

GIdenJoe
Kind of a big deal

The 9500X is an entirely new ASIC, so that feature will probably not ship any time soon.  However, the only two switches in the 9500X range are one with 28x 100 Gbps and 8x 400 Gbps ports and another with 60x 50 Gbps and 4x 400 Gbps ports.  That is, to put it lightly, a little out of scope for our customers, cough cough ;p.

 

The 9500 switches I look at are the C9500-16X and the C9500-24Y4C.  Those are still in the UADP chip range, so they do support ISSU, though I bet it also took a long while before they actually did.

 

Having said that, wouldn't it make more sense to emulate a SAN design even if you're using iSCSI?  So have a SAN A and a SAN B, and then upgrade SAN A at one time and SAN B at another time?

PhilipDAth
Kind of a big deal

>So having a SAN A and SAN B and then only upgrade SAN A at one time and SAN B at another time?

 

That is exactly what I do.  And if those are separate Meraki networks, and you give them different maintenance windows, you can pretty much "set and forget" the network.
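
For the maintenance-window part, a minimal sketch along these lines should work, assuming the firmware-upgrades endpoint behaves as described in the v1 Dashboard API docs; the network IDs and window values are placeholders, and the exact accepted day/hour formats should be checked against the documentation.

```python
import meraki

API_KEY = "YOUR_API_KEY"   # placeholder
NET_A = "N_AAAA"           # placeholder ID of the "-A" network
NET_B = "N_BBBB"           # placeholder ID of the "-B" network

dashboard = meraki.DashboardAPI(API_KEY, suppress_logging=True)

# Stagger the upgrade windows so path A and path B never reboot
# in the same maintenance window.
dashboard.networks.updateNetworkFirmwareUpgrades(
    NET_A,
    upgradeWindow={"dayOfWeek": "sun", "hourOfDay": "2:00"},
)
dashboard.networks.updateNetworkFirmwareUpgrades(
    NET_B,
    upgradeWindow={"dayOfWeek": "wed", "hourOfDay": "2:00"},
)
```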

niroulabh
Getting noticed

If you don't stack, how do you avoid the spanning tree issue with the management VLAN? Do you use a single management VLAN for all the Meraki switches in a site?

PhilipDAth
Kind of a big deal

BPDUs are only sent over the native VLAN.  Use the same native VLAN on all inter-switch links, and you won't have an issue.
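
A quick way to sanity-check that across a fleet is to list each switch's trunk ports and flag any whose native VLAN differs from the one you expect. This is a hedged sketch using the Dashboard API Python SDK; the serial list and the expected native VLAN are placeholder assumptions.

```python
import meraki

API_KEY = "YOUR_API_KEY"                                # placeholder
SWITCH_SERIALS = ["Q2XX-AAAA-AAAA", "Q2XX-BBBB-BBBB"]   # placeholder serials
EXPECTED_NATIVE_VLAN = 100                              # placeholder native VLAN

dashboard = meraki.DashboardAPI(API_KEY, suppress_logging=True)

for serial in SWITCH_SERIALS:
    for port in dashboard.switch.getDeviceSwitchPorts(serial):
        # Only trunk ports carry BPDUs on their native VLAN; access ports
        # are irrelevant to the inter-switch consistency check.
        if port.get("type") != "trunk":
            continue
        native = port.get("vlan")
        if native != EXPECTED_NATIVE_VLAN:
            print(f"{serial} port {port['portId']}: native VLAN {native}, "
                  f"expected {EXPECTED_NATIVE_VLAN}")
```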
