The Meraki best practices guide states:
Keep the STP diameter under 7 hops, such that packets should never have to traverse more than 7 switches to get from one point of the network to the other
With this in mind, are stacked Meraki switches considered a single STP hop, or are they still counted as multiple hops based on the number of switches in the stack?
Would this example topology work?
- MS 425 Core stack with MS225 stacks
- LACP/MLAG Trunk between Core & Access Stacks
- Layer 2 VLANs across all switches, with L3 gateway terminated on Core
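(For reference, the stacks themselves would just be defined via the Dashboard API, roughly like the sketch below; this assumes the official meraki Python SDK, and the network ID and serials are placeholders, not our real ones.)

```python
# Sketch of standing up the two stacks in the design above, using the official
# "meraki" Python SDK (pip install meraki). The endpoint name is from the
# public Dashboard API; the network ID and serials are placeholders.
import meraki

dashboard = meraki.DashboardAPI()  # API key read from MERAKI_DASHBOARD_API_KEY

NETWORK_ID = "N_1234"  # placeholder network ID

# MS425 core stack (2 members) and one MS225 access stack (4 members).
core_stack = dashboard.switch.createNetworkSwitchStack(
    NETWORK_ID,
    name="Core MS425 Stack",
    serials=["Q2XX-XXXX-0001", "Q2XX-XXXX-0002"],
)
access_stack = dashboard.switch.createNetworkSwitchStack(
    NETWORK_ID,
    name="Access MS225 Stack 1",
    serials=["Q2XX-YYYY-0001", "Q2XX-YYYY-0002",
             "Q2XX-YYYY-0003", "Q2XX-YYYY-0004"],
)
```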
Yes that would be fine. A stack is considered one STP hop.
@RaphaelL is exactly right!
You have a nice design, by the way.
If you ask me, the only thing I would change is the number of stack members. If you have enough free ports in your core, it's better to have several access switches stacked in groups of two or four. Doing this would allow you to isolate failures and do maintenance with less downtime / impact (e.g. when doing firmware upgrades).
Thanks guys, that is reassuring advice.
I've actually got a couple of sites with this stacked MS425 core & MS225 access stack topology, although the max 8 stack count was just an example above.
On that topic though, the largest stack size I've gone to is 6 units. Most are only 3-4 high.
What I also just noticed was this comment here about "Use distributed uplinks across the stack such that they are equidistant".
https://documentation.meraki.com/MS/Meraki_Campus_LAN%3B_Planning%2C_Design_Guidelines_and_Best_Prac...
This is something we hadn't put any thought into, aside from keeping the uplinks across multiple switches in case a single unit fails.
Is there really that much difference in reality between the "Acceptable" and "Best" above in terms of the uplink positions in the stack?
(Granted, MS225 stacking is 40 Gbit with 20 Gbit LACP uplinks to the core, and all these access ports are just office computers, not some sort of high-bandwidth server/SAN setup.)
This has me wondering whether it's worth splitting that 6-high stack; I've got spare fiber, so it could be done with some extra optics. Firmware is only ever done on weekends with plenty of possible recovery time, but isolation of possible failures is critical.
That's a very good question, @MikeHunt !
"Is there really that much difference in reality between the "Acceptable" and "Best" above in terms of the uplink positions in the stack?"
That recommendation is focused on resilience / fault tolerance. The idea is that all the other stack members still have a path to the core if any single member switch is powered down or doesn't boot. That's why you shouldn't connect all of your uplinks to the same stack member; spreading them across members is what gets you to the Acceptable scenario.
The Best scenario goes a step further: any pair of members can be down and the remaining switches still have an uplink to the core. Like you said, bandwidth and latency aren't the priority here; it's more about fault tolerance.
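If it helps to make that concrete, the cross-stack uplink LAG is just a link aggregation whose member ports live on different stack units. A rough sketch using the official meraki Python SDK's link aggregation endpoint (network ID, serials and port IDs are placeholders):

```python
# Sketch of a cross-stack uplink LAG: the two LACP member ports sit on
# different MS225 stack units, so losing either unit still leaves the
# stack with a path to the core. Uses the official "meraki" Python SDK;
# the network ID, serials and port IDs are placeholders.
import meraki

dashboard = meraki.DashboardAPI()

NETWORK_ID = "N_1234"  # placeholder

dashboard.switch.createNetworkSwitchLinkAggregation(
    NETWORK_ID,
    switchPorts=[
        {"serial": "Q2XX-XXXX-0001", "portId": "49"},  # uplink on stack member 1
        {"serial": "Q2XX-XXXX-0006", "portId": "49"},  # uplink on stack member 6
    ],
)
```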
Ah right, I hadn't factored in a pair of MS225s failing at the same time, since we have cold MS225 spares onsite ready.
As for stack sizing -
I could get extra optics to split the 6-high stack into 2 smaller 3-high stacks to minimize any future disruption incurred during stack repairs.
But since these stacks are not hot-swappable anyway (and that's a disappointing flaw), realistically we'd be waiting for a downtime window regardless of stack size. We won't want to take down any more switches in the repair process during business hours. We'd just have to do a temporary single-switch setup/uplink to get those ports going until we can power off & rebuild the original stack.
Oh yeah, that works!
But think about firmware upgrades too, not just replacing stack members. Smaller stacks allow more room for staged upgrades.
In any case, what matters most is your use-case. Like you said, this is an access layer deployment, not a high bandwidth server/SAN setup.
I'm not going to answer your question, because others have already done a great job.
I will instead mention that the MS425 went "end of sale" Jun 24, 2024 - so it can no longer be purchased. Instead take a look at the C9300 range. The switch selector is quite good.
The C9300-M is quite popular; it has a module slot, so you can add whatever port mix you like.
https://documentation.meraki.com/MS/MS_Overview_and_Specifications/Catalyst_9300-M_datasheet
@Tony-Sydney-AU , the MS425 should be removed from the switch selector since you can't buy them anymore ...
https://meraki.cisco.com/products/switches/models/
Thank you for bringing this up, @PhilipDAth .
I pinged Internal Teams and they will update the switch selector.
Ah yes, I did notice that whilst researching these recent issues!
I use the unofficial meraki-cli Python script to create all the 100+ L3 core VLAN interfaces & DHCP config via the API. Is that likely to still work?
That would save on re-tooling our workflow.
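For reference, that workflow is essentially just a loop over our VLAN plan. A minimal sketch of what it does, written against the official meraki Python SDK rather than meraki-cli (endpoint names are from the public Dashboard API; the IDs and VLAN plan below are placeholders):

```python
# Minimal sketch of what that provisioning step does, written against the
# official "meraki" Python SDK rather than the unofficial meraki-cli wrapper.
# Endpoint names follow the public Dashboard API; the IDs and the VLAN plan
# below are placeholders.
import meraki

dashboard = meraki.DashboardAPI()

NETWORK_ID = "N_1234"    # placeholder network ID
CORE_STACK_ID = "5678"   # placeholder core stack ID

# Hypothetical VLAN plan: (vlan_id, name, subnet, gateway_ip)
vlan_plan = [
    (10, "Office", "10.0.10.0/24", "10.0.10.1"),
    (20, "Voice", "10.0.20.0/24", "10.0.20.1"),
    # ... 100+ entries in reality
]

for vlan_id, name, subnet, gateway_ip in vlan_plan:
    # Create the L3 interface (SVI) for this VLAN on the core stack.
    interface = dashboard.switch.createNetworkSwitchStackRoutingInterface(
        NETWORK_ID,
        CORE_STACK_ID,
        name=name,
        vlanId=vlan_id,
        subnet=subnet,
        interfaceIp=gateway_ip,
    )
    # Enable a DHCP server scoped to that interface (other DHCP options
    # would be set here as well).
    dashboard.switch.updateNetworkSwitchStackRoutingInterfaceDhcp(
        NETWORK_ID,
        CORE_STACK_ID,
        interface["interfaceId"],
        dhcpMode="dhcpServer",
        dhcpLeaseTime="1 day",
    )
```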
Otherwise, can it be configured directly via the Cisco CLI?